TITLE: Notation: is it proper to multiply matrix with a vector represented using an n-tuple $\begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix}(x_1, x_2)$? QUESTION [5 upvotes]: I've recently noticed, talking to a few classmates from another school in undergrad engineering, that they denote a vector using an n-tuple, i.e. $x = (x_1, x_2, x_3, \ldots, x_n) \in \mathbb{R}^n$. It is fine with me, but then I noticed that they would proceed to multiply a matrix (for instance a $2 \times 2$ matrix $A$) as: $Ax = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix} x = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix}(x_1, x_2)$ I was raised to denote $x$ as a column vector, so $Ax = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix} x = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}$ feels more comfortable to me. Truthfully, it also feels more correct. Would $Ax = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix} x = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix}(x_1, x_2)$ be considered correct as well? Is it good practice? REPLY [4 votes]: In at least one course that I've been to, the $n$-tuple notation $(x_1,\dots,x_n)$ was used as a convenient way to write column vectors on one line. So in that course our convention was $$(x_1,\dots,x_n)=\begin{pmatrix}x_1\\ \vdots\\ x_n\end{pmatrix}$$ $$(x_1,\dots,x_n)\neq\begin{pmatrix}x_1& \dots& x_n\end{pmatrix}$$ In other words, adding commas turns a row into a column (which is a bit confusing but saves space on the page). In this case multiplying by a matrix on the left would make perfect sense.<|endoftext|> TITLE: Formal Power Series as Linear Operators QUESTION [6 upvotes]: Let $t^k$ act as the $k$-th derivative operator on the set of polynomials. So $$t^k(x^n)=t^k x^n=(n)_kx^{n-k}$$ where $(n)_k=n(n-1)(n-2)\cdots(n-k+1)$ is the falling factorial. Then with a formal power series, $f(t)=\sum_{k\ge 0}a_k\frac{t^k}{k!}$, the linear operator $f(t)$ acts such that $$f(t)(x^n)=f(t)x^n=\sum_{k=0}^n\binom{n}{k}a_k x^{n-k}$$ Therefore, depending on the coefficients of the power series, we can get some interesting binomial identities. For example, if $f(t)=e^{yt}$, since the coefficients $a_n=y^n$, we get $$e^{yt}x^n=\sum_{k=0}^n\binom{n}{k}y^k x^{n-k}=(x+y)^n$$ by linearity, $$(e^{yt}-1)x^n=(x+y)^n-x^n=\sum_{k=1}^{n}\binom{n}{k}y^k x^{n-k}$$ and perhaps not as obvious $$\left(\frac{e^{yt}-1}{t}\right)x^n=\int_{x}^{x+y}u^ndu$$ Now suppose that $f(t)=e^{yt}-1-yt$. Then $$(e^{yt}-1-yt)x^n=(x+y)^n-x^n-ynx^{n-1}=\sum_{k=2}^{n}\binom{n}{k}y^k x^{n-k}$$ Obviously there is a nicely formed forward difference equation in the previous case that is not happening here. But there is a relationship with subtracted terms of the binomial expansion. What I would really like help understanding is whether or not a possible analogous integral representation exists for the following operator: $$\left(\frac{e^{yt}-1-yt}{t^2}\right)x^n=\left(\sum_{k=0}^\infty\frac{y^{k+2}}{(k+2)(k+1)}\frac{t^k}{k!}\right)x^n=\sum_{k=0}^n\binom{n}{k}\frac{y^{k+2}}{(k+1)(k+2)}x^{n-k}$$ $$=\sum_{k=0}^n\binom{n+2}{k+2}\frac{y^{k+2}}{(n+1)(n+2)}x^{n-k}=\frac{1}{(n+1)(n+2)}\sum_{k=2}^{n+2}\binom{n+2}{k}y^kx^{n+2-k}$$ It is not as simple. Clearly $x^n=\frac{d^2}{dx^2}\frac{x^{n+2}}{(n+2)(n+1)}$. If I integrate as below, I think the math is correct: $$\int_x^{x+y}{\frac{u^{n+1}}{n+1}}du=\frac{1}{(n+1)(n+2)}\sum_{k=1}^{n+2}\binom{n+2}{k}y^kx^{n+2-k}$$ Which is really close, but the lower bound on the summation is $1$, not $2$.
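(A quick symbolic check of this mismatch, added for illustration; a sympy sketch of my own, not part of the original question. The difference between the integral and the operator output should be exactly the missing $k=1$ term, $\frac{y\,x^{n+1}}{n+1}$.)

import sympy as sp

x, y, u = sp.symbols('x y u')
n = 5  # any fixed nonnegative degree

# operator output: (1/((n+1)(n+2))) * sum_{k=2}^{n+2} C(n+2,k) y^k x^(n+2-k)
op = sum(sp.binomial(n + 2, k) * y**k * x**(n + 2 - k)
         for k in range(2, n + 3)) / ((n + 1) * (n + 2))

# candidate integral representation
integral = sp.integrate(u**(n + 1) / (n + 1), (u, x, x + y))

print(sp.expand(integral - op))  # prints x**6*y/6, i.e. the k=1 term y*x^(n+1)/(n+1)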
Does anyone have any insight into how I can fix this, if possible? REPLY [3 votes]: Note: OP's calculations are quite OK and they show the operators are closely related, but different. I don't think there is a necessity to fix anything. I skimmed through the classic The Umbral Calculus by Steven Roman, but there was no indication that something more is going on regarding OP's question. Another source I've checked without success was The Calculus of Finite Differences by C. Jordan. It might be helpful to list a few higher powers of the operators under consideration. In the following I use OP's notation, which is precisely the same as that used by Steven Roman. Translation operator: $e^{yt}$ Since this operator satisfies: \begin{align*} e^{yt}x^n&=\sum_{k=0}^\infty \frac{y^k}{k!}t^kx^n =\sum_{k=0}^n \frac{y^k}{k!}(n)_kx^{n-k}=\sum_{k=0}^n\binom{n}{k}y^kx^{n-k}\\ &=(x+y)^n \end{align*} we obtain \begin{align*} \left(e^{yt}\right)^2 x^n=e^{yt}(x+y)^n=(x+2y)^n \end{align*} and in general for $j\geq 1$ \begin{align*} e^{jyt}x^n=(x+jy)^n \end{align*} Since $x^n, n\geq 0$ form a basis of the vector space of all polynomials $p$ in a single variable $x$ and the translation operator is linear, we obtain \begin{align*} e^{jyt}p(x)=p(x+jy)\tag{1} \end{align*} hence the name translation operator. Forward difference operator: $e^{yt}-1$ Here we obtain for polynomials $p$ using (1) \begin{align*} \left(e^{yt}-1\right)p(x) = p(x+y)-p(x) \end{align*} The next one is Operator: $\frac{\exp(yt)-1}{t}$ We obtain \begin{align*} \left(\frac{e^{yt}-1}{t}\right)x^n&=\sum_{k=1}^\infty \frac{y^k}{k!}t^{k-1}x^n\\ &=\sum_{k=1}^{n+1}\frac{y^k}{k!}(n)_{k-1}x^{n-(k-1)}\\ &=\frac{1}{n+1}\sum_{k=1}^{n+1}\binom{n+1}{k}y^kx^{n+1-k}\\ &=\frac{1}{n+1}\left((x+y)^{n+1}-x^{n+1}\right)\tag{2}\\ &=\int_x^{x+y}u^n\,du \end{align*} Similarly we obtain from (2) by linearity \begin{align*} \left(\frac{e^{yt}-1}{t}\right)^2x^n &=\left(\frac{e^{yt}-1}{t}\right)\frac{1}{n+1}\left((x+y)^{n+1}-x^{n+1}\right)\\ &=\frac{1}{n+1}\left[\frac{1}{n+2}\left((x+2y)^{n+2}-(x+y)^{n+2}\right)\right.\\ &\qquad\qquad\quad\left.-\frac{1}{n+2}\left((x+y)^{n+2}-x^{n+2}\right)\right]\\ &=\frac{1}{(n+1)(n+2)}\left((x+2y)^{n+2}-2(x+y)^{n+2}+x^{n+2}\right)\\ &=\frac{1}{n+1}\left(\int_{x+y}^{x+2y}u^{n+1}\, du-\int_x^{x+y}u^{n+1}\,du\right)\tag{3} \end{align*} Here at (2) and (3) we can see quite nicely how the operator $\frac{\exp(yt)-1}{t}$ is connected with the integral operator. It can be extended to higher powers without too much effort, and the relationship with the integral operator looks plausible. Operator: $\frac{\exp(yt)-1-yt}{t^2}$ Now we take a look at the operator which is the focus of OP's question, and its generalisation. \begin{align*} \left(\frac{e^{yt}-1-yt}{t^{2}}\right)x^n &=\sum_{k=2}^\infty\frac{y^k}{k!}t^{k-2}x^n\\ &=\sum_{k=2}^{n+2}\frac{y^{k}}{k!}(n)_{k-2}x^{n-(k-2)}\\ &=\frac{1}{(n+2)_2}\sum_{k=2}^{n+2}\binom{n+2}{k}y^kx^{n+2-k}\\ &=\frac{1}{(n+2)_2}\left((x+y)^{n+2}-x^{n+2}-(n+2)yx^{n+1}\right) \end{align*} Comparing the final expression with (3) we do not see a plausible representation via integrals, since the term $(n+2)yx^{n+1}$ doesn't provide anything of the nice form \begin{align*} \text{integrated expression (end point) - integrated expression (starting point)} \end{align*} This impression becomes stronger when looking at the general case below; first, a short symbolic check of the closed form just obtained.
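(A sympy sketch, added for illustration; it confirms the coefficient $(n+2)$ in front of $yx^{n+1}$:)

import sympy as sp

x, y = sp.symbols('x y')
n = 5  # any fixed degree

# defining series sum_{k=2}^{n+2} y^k/k! * (n)_{k-2} * x^(n-(k-2))
series = sum(y**k / sp.factorial(k) * sp.ff(n, k - 2) * x**(n - (k - 2))
             for k in range(2, n + 3))
closed = ((x + y)**(n + 2) - x**(n + 2) - (n + 2)*y*x**(n + 1)) / ((n + 2) * (n + 1))
print(sp.expand(series - closed))  # 0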
We obtain for $j\geq 1$ \begin{align*} \left(\frac{e^{yt}-1-yt-\frac{(yt)^2}{2!}-\cdots-\frac{(yt)^{j-1}}{(j-1)!}}{t^j}\right)x^n &=\sum_{k=j}^\infty\frac{y^k}{k!}t^{k-j}x^n\\ &=\frac{1}{(n+j)_j}\sum_{k=j}^{n+j}\binom{n+j}{k}y^kx^{n+j-k}\\ &=\frac{1}{(n+j)_j}\left((x+y)^{n+j}-\sum_{k=0}^{j-1}\binom{n+j}{k}y^kx^{n+j-k}\right) \end{align*}<|endoftext|> TITLE: Galois group of $f(x^2)$ QUESTION [8 upvotes]: If the Galois group of some irreducible polynomial $f(x)$ over some field $F$ is known, is there some method for calculating the Galois group of the polynomial $f(x^2)$ over the same field? REPLY [8 votes]: The two groups are related in the following sense: assuming that $ f \in F[x] $ is separable with $ \operatorname{char} F \neq 2 $, if $ L/F $ is the splitting field of $ f(x) $ and $ M/F $ is the splitting field of $ f(x^2) $ in some fixed algebraic closure $ \bar{F} $, then $ L $ is a subfield of $ M $. Identifying $ H = \textrm{Gal}(L/F) $ and $ G = \textrm{Gal}(M/F) $, we observe that $ H $ is a quotient of $ G $, that is, there is a surjective homomorphism $ G \to H $ given by restriction to $ L $. Furthermore, the extension $ M/L $ is obtained by adjoining square roots of elements to $ L $, therefore the Galois group $ \textrm{Gal}(M/L) $ admits a particularly simple decomposition: it is the direct sum of finitely many copies of $ \mathbf Z/2\mathbf Z $. Thus, we see that the Galois groups $ G $ and $ H $ fit into a short exact sequence $$ 0 \to (\mathbf Z/2\mathbf Z)^n \to G \to H \to 0 $$ where $ [M:L] = 2^n $. I do not think more can be inferred, in general, about the group $ G $ from the structure of the group $ H $.<|endoftext|> TITLE: Same Diagonal Dissection QUESTION [6 upvotes]: Divide a rectangle into smaller rectangles with two criteria: All sub-rectangles must have different sizes. All sub-rectangles must have diagonals with length 1. What is the smallest possible number of rectangles in a solution? Here is a solution where a square is divided into 12 rectangles with unit diagonals. The $x$ values are 0., 0.126115, 0.45009, 0.767632, 1.12506, 1.43832, 1.74608, 1.885, and the $y$ values are 0., 0.04595, 0.783796, 0.992016, 0.990303, 1.73204, 1.885. Code for 100 digits of accuracy: NMinimize[{(-1+(x1-x4)^2+y1^2)^2+(-1+(x2-x4)^2+(y1-y2)^2)^2+(-1+(x4-x6)^2+y2^2)^2+(-1+(x1-x2)^2+(y1-y3)^2)^2+(-1+x1^2+y3^2)^2+(-1+(x3-x6)^2+(y2-y4)^2)^2+(-1+(x6-x7)^2+y4^2)^2+(-1+(x2-x3)^2+(y2-y5)^2)^2+(-1+(x3-x5)^2+(y4-y5)^2)^2+(-1+x2^2+(y3-y6)^2)^2+(-1+(x5-x7)^2+(y4-y6)^2)^2+(-1+(x2-x5)^2+(y5-y6)^2)^2, 1/9100, AccuracyGoal->100, MaxIterations->100, WorkingPrecision->100] Below is an earlier, slightly flawed version of the same dissection that was found by hand. The diagonals form two connected sets, colored red and green here. Is there a solution with fewer rectangles? Can a rectangle be divided into 11 or fewer different unit-diagonal rectangles?
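(For reference, a quick numerical check of the stated 12-rectangle solution, added for illustration: a Python translation of the twelve unit-diagonal constraints from the NMinimize objective above. Mapping the listed values to $x_1,\dots,x_7$ and $y_1,\dots,y_6$ is my own reading of the code.)

x1, x2, x3, x4, x5, x6, x7 = 0.126115, 0.45009, 0.767632, 1.12506, 1.43832, 1.74608, 1.885
y1, y2, y3, y4, y5, y6 = 0.04595, 0.783796, 0.992016, 0.990303, 1.73204, 1.885

diagonals = [  # squared diagonal of each of the 12 sub-rectangles
    (x1 - x4)**2 + y1**2,        (x2 - x4)**2 + (y1 - y2)**2,
    (x4 - x6)**2 + y2**2,        (x1 - x2)**2 + (y1 - y3)**2,
    x1**2 + y3**2,               (x3 - x6)**2 + (y2 - y4)**2,
    (x6 - x7)**2 + y4**2,        (x2 - x3)**2 + (y2 - y5)**2,
    (x3 - x5)**2 + (y4 - y5)**2, x2**2 + (y3 - y6)**2,
    (x5 - x7)**2 + (y4 - y6)**2, (x2 - x5)**2 + (y5 - y6)**2,
]
print(max(abs(d - 1.0) for d in diagonals))  # ~1e-5: every diagonal has length 1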
REPLY [4 votes]: First, I'll show an example of such a $9$-tiling with more comfortable side lengths: Tiling: its pattern: its sub-rectangle sizes: $x_1 = 0.884, y_1 \approx 0.46748690;$ $x_2 \approx 0.84923220, y_2 \approx 0.52801957;$ $x_3 \approx 0.99816622, y_3 \approx 0.06053267;$ $x_4 \approx 0.11416622, y_4 \approx 0.99346166;$ $x_5 \approx 0.85050017, y_5 \approx 0.52597476;$ $x_6 \approx 0.88273202, y_6 \approx 0.46987676;$ $x_7 \approx 0.06482983, y_7 \approx 0.99789633;$ $x_8 \approx 0.94756185, y_8 \approx 0.31957242;$ $x_9 \approx 0.96466638, y_9 \approx 0.26347442.$ (here $x_1=0.884$ is chosen arbitrarily from the interval $(0.8686, 0.8946)$ - see below). And note that there are no "fixed"/unique tilings; they exist as $1$-parametric families of tilings: Now a wider description: It seems there exists a "regular" (maybe elegant :) algorithm. Step 1: let's call an $n$-tiling "irreducible" if removing any $1$ or more sub-rectangles makes this tiling non-rectangular (excluding the trivial removal of $n-1$ sub-rectangles). For each considered $n$ we'll find all possible "irreducible" tiling patterns which allow constructing all sub-rectangles of different sizes: $n=5$: 1 possible pattern: $n=6$: there are no "irreducible" tilings: $n=7$: $2$ possible patterns: (Let's focus on $7$-tilings.) Step 2: Enumerate sub-rectangles such that the $1$st and $2$nd sub-rectangles are "free", and all further sub-rectangles depend on these two: let us have $2$ free parameters: $$ a \in(0,1),\; b\in(0,1); $$ $7$-tiling A: running forward, we mark such sides as "free" as would give us more comfortable calculations: $ y_1 = a, \; x_1 = \sqrt{1-a^2}; \\ x_2 = b, \; y_2 = \sqrt{1-b^2}; \\ x_3 = x_1-x_2, \; y_3 = \sqrt{1-x_3^2}; \quad (x_3>0); \\ y_4 = y_2-y_3, \; x_4 = \sqrt{1-y_4^2}; \quad (y_4>0); \\ x_5 = x_3-x_4, \; y_5 = \sqrt{1-x_5^2}; \quad (x_5>0); \\ y_6 = y_4-y_5, \; x_6 = \sqrt{1-y_6^2}; \quad (y_6>0); \\ x_7 = x_6-x_5, \; y_7 = \sqrt{1-x_7^2}; \quad (x_7>0); $ and close these dependencies by the equation: $ y_1+y_3+y_5=y_7. $ Now, we have $16$ variables $a,b,x_1,y_1,\ldots,x_7,y_7$ and $15$ equations. It describes some $2D$-curve: $$ F_1(a,b)=0, \;\mbox{ or } b = f_1(a) \mbox{ in our case }, \; \; a\in (a_1,a_2). $$ For this pattern we'll have $a_1 = 0, a_2 \approx 0.0732518$. Plot of the curve (Mathematica): y1 := a; x1 := Sqrt[1 - a^2]; x2 := b; y2 := Sqrt[1 - b^2]; x3 := x1 - x2; y3 := Sqrt[1 - x3^2]; y4 := y2 - y3; x4 := Sqrt[1 - y4^2]; x5 := x3 - x4; y5 := Sqrt[1 - x5^2]; y6 := y4 - y5; x6 := Sqrt[1 - y6^2]; x7 := x6 - x5; y7 := Sqrt[1 - x7^2]; ContourPlot[y1 + y3 + y5 == y7, {a, 0.000, 0.08}, {b, 0.000, 0.08}, GridLines -> {0.01*Range[0, 10], 0.01*Range[0, 10]}] A few points of this curve: $ (a,b)\approx (0, 0.004217507); \\ (a,b)\approx (0.01, 0.003927340); \\ (a,b)\approx (0.02, 0.003547516); \\ (a,b)\approx (0.03, 0.003077705); \\ (a,b) \approx (0.0307543, 0.00303861); \mbox{ see Ivan Neretin's answer} \\ (a,b)\approx (0.04, 0.002517544); \\ (a,b)\approx (0.05, 0.001866428); \\ (a,b)\approx (0.06, 0.001124581); \\ (a,b)\approx (0.07, 0.000290911); \\ (a,b)\approx (0.0732518, 0). $ So this kind of tiling must have shortest sub-rectangle side less than $\approx0.004$. $7$-tiling B: similar way; but this pattern gives us a tiling with pairs of equivalent sub-rectangles. $5$-tiling: gives us a tiling with pairs of equivalent sub-rectangles. $8$-tilings: there are a few patterns with successful tilings, but all of them (which I considered) have a smallest side length too small for viewing.
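(Before the $9$-tilings: the $7$-tiling A curve above is easy to reproduce without Mathematica. The following plain-Python bisection along the dependency chain is my own illustration; the bracketing interval for $b$ is an assumption that happens to work for small $a$.)

import math

def residual(a, b):
    # chain of unit-diagonal side lengths for 7-tiling A
    y1 = a;        x1 = math.sqrt(1 - a*a)
    x2 = b;        y2 = math.sqrt(1 - b*b)
    x3 = x1 - x2;  y3 = math.sqrt(1 - x3*x3)
    y4 = y2 - y3;  x4 = math.sqrt(1 - y4*y4)
    x5 = x3 - x4;  y5 = math.sqrt(1 - x5*x5)
    y6 = y4 - y5;  x6 = math.sqrt(1 - y6*y6)
    x7 = x6 - x5;  y7 = math.sqrt(1 - x7*x7)
    return y1 + y3 + y5 - y7   # closing equation

def solve_b(a, lo=1e-6, hi=0.01):
    for _ in range(60):        # plain bisection on the closing equation
        mid = (lo + hi) / 2
        if residual(a, lo) * residual(a, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(solve_b(0.03))   # ~0.0030777, matching the table above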
$9$-tilings: The tiling presented above can be described by the system of equations: $x_1 = a, \; y_1 = \sqrt{1-x_1^2};$ $x_2 = b, \; y_2 = \sqrt{1-x_2^2};$ $y_3 = y_2-y_1, \; x_3 = \sqrt{1-y_3^2}; \quad (y_3>0);$ $x_4 = x_3-x_1, \; y_4 = \sqrt{1-x_4^2}; \quad (x_4>0);$ $y_5 = y_4-y_1, \; x_5 = \sqrt{1-y_5^2}; \quad (y_5>0);$ $x_6 = x_2+x_3-x_4-x_5, \; y_6 = \sqrt{1-x_6^2}; \quad (x_6\in(0,1)\: );$ $y_7 = y_2+y_6, \; x_7 = \sqrt{1-y_7^2}; \quad (y_7<1);$ $x_8 = x_6+x_7, \; y_8 = \sqrt{1-x_8^2}; \quad (x_8<1);$ $y_9 = y_7+y_8-y_2-y_5, \; x_9 = \sqrt{1-y_9^2}; \quad (y_9\in(0,1)\:);$ and $x_4+x_5=x_9$, where $a\in(0,1)$, $b\in(0,1)$, and $(a,b)$ are therefore linked by a certain dependency $F_2(a,b)=0$, or $b= f_2(a)$ in our case, where $a\in(a_1,a_2)$; $a_1 \approx 0.868517092, a_2 \approx 0.8946$. And a few points of the curve: $ (a,b) \approx (0.868517092, 0.868517091); \\ (a,b) \approx (0.870, 0.86678506); \\ (a,b) \approx (0.875, 0.86073752); \\ (a,b) \approx (0.880, 0.85443240); \\ (a,b) \approx (0.885, 0.84791329); \\ (a,b) \approx (0.890, 0.84122558); \\ (a,b) \approx (0.8946, 0.83497558). $<|endoftext|> TITLE: Did I derive a new form of the gamma function? QUESTION [29 upvotes]: I wish to extend the factorial to non-integer arguments in a unique way, given the following conditions: $n!=n(n-1)!$ $1!=1$ To anyone interested in viewing the final form before reading the whole post: $$x!=\exp\left[\int_0^x\left(-\gamma+\int_0^1\frac{1-t^\phi}{1-t}dt\right)d\phi\right]$$ $$f(x):=\ln(x!)$$ $$f(x)=\ln(x!)=\ln(x)+\ln((x-1)!)=\ln(x)+f(x-1)$$ $$f(x)=f(x-1)+\ln(x)$$ $$\frac d{dx}f(x)=\frac d{dx}f(x-1)+\frac d{dx}\ln(x)$$ $$f'(x)=f'(x-1)+\frac1x\tag1$$ $$f'(x)=f'(x-2)+\frac1{x-1}+\frac1x$$ $$=f'(0)+1+\frac12+\frac13+\dots+\frac1x$$ for $x\in\mathbb N$: $$f'(x)=f'(0)+\sum_{n=1}^x\frac1n\tag2$$ Euler has a nice extension of the harmonic numbers to non-integer arguments, $$f'(x)=f'(0)+\int_0^1\frac{1-t^x}{1-t}dt\tag{2.1}$$ From the FTOC we have $$\ln(x!)=\int_0^x\left(f'(0)+\int_0^1\frac{1-t^\phi}{1-t}dt\right)d\phi$$ $$x!=\exp\left[\int_0^x\left(f'(0)+\int_0^1\frac{1-t^\phi}{1-t}dt\right)d\phi\right]\tag3$$ And with $f'(0)=-\gamma$, the Euler-Mascheroni constant, we should get the gamma function. Or we may just let it sit as an unknown parameter. My question is whether this captures all possible extensions of the factorial with the given conditions, since, if it did, it'd be a pretty good general extension of the factorial. Given a few more assumptions, it is easy enough to set bounds on what $f'(0)$ might be as well. Notably, this representation fails when considering $\Re(x)\le-1$, but coupled with the first condition, it is extendable to all $x$, except of course the negative integers. robjohn♦ notes an extension to the harmonic numbers that converges for $x\in\mathbb C$, except the negative integers: $$\int_0^1\frac{1-t^\phi}{1-t}dt=\sum_{n=1}^\infty\left(\frac1n-\frac1{n+\phi}\right)$$ Any suggestions on things I could've improved and flaws in this would be nice. Edit: Using the second condition and $x=1$, we may have $$1=\exp\left[\int_0^1\left(f'(0)+\int_0^1\frac{1-t^\phi}{1-t}dt\right)d\phi\right]$$ $$\implies f'(0)=-\int_0^1\int_0^1\frac{1-t^\phi}{1-t}dt\ d\phi$$ $$f'(0)=-\gamma$$ where $\gamma$ is the Euler-Mascheroni constant.
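(This double integral is easy to confirm numerically; an mpmath sketch, added for illustration:)

from mpmath import mp, quad, euler

mp.dps = 20
val = quad(lambda p, t: (1 - t**p) / (1 - t), [0, 1], [0, 1])
print(val)    # 0.57721566490153...
print(euler)  # the Euler-Mascheroni constant, for comparison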
Using this we get a new form of the gamma function(?): $$\boxed{x!=\exp\left[\int_0^x\left(-\gamma+\int_0^1\frac{1-t^\phi}{1-t}dt\right)d\phi\right]}\tag4$$ $$=\exp\left[\int_0^x\left(-\gamma+\sum_{n=1}^\infty\left(\frac1n-\frac1{n+\phi}\right)\right)d\phi\right]$$ I'm not sure how to deal with trivial manipulations of this expression, as surely someone is gonna say "hey, just multiply everything by $(1+\sin(2\pi x))$ and it will still satisfy the conditions, right?" But regardless, I think this is a pretty cool new gamma function? Also, references to this if it's not new. If someone could make a graph of this to look at, you would be great. REPLY [11 votes]: Integral of the Logarithmic Derivative of Gamma The logarithmic derivative of the Gamma function is the digamma function: $$ \frac{\Gamma'(x)}{\Gamma(x)}=\psi(x)=-\gamma+H_{x-1}\tag{1} $$ Therefore, $$ \begin{align} \log(\Gamma(x)) &=\int_1^x\left(-\gamma+H_{\phi-1}\right)\mathrm{d}\phi\\ &=\int_1^x\left(-\gamma+\int_0^1\frac{1-t^{\phi-1}}{1-t}\,\mathrm{d}t\right)\mathrm{d}\phi\\ &=\int_0^{x-1}\left(-\gamma+\int_0^1\frac{1-t^\phi}{1-t}\,\mathrm{d}t\right)\mathrm{d}\phi\tag{2} \end{align} $$ Verification of the Gamma Function The Bohr-Mollerup Theorem says that the Gamma function is uniquely determined as the log-convex function such that $\Gamma(1)=1$ and $\Gamma(x+1)=x\,\Gamma(x)$. We can verify these assuming only $H_x-H_{x-1}=\frac1x$ and $H'_x\ge0$. $\boldsymbol{\Gamma(1)=1}$: $$ \begin{align} \log(\Gamma(1)) &=\int_1^1\left(-\gamma+H_{\phi-1}\right)\mathrm{d}\phi\\ &=0\tag{3} \end{align} $$ $\boldsymbol{\Gamma(x+1)=x\,\Gamma(x)}$: Since $\lim\limits_{n\to\infty}\left(H_n-\log(n)\right)=\gamma$ and $H_n-\frac1n\le\int_{n-1}^nH_\phi\,\mathrm{d}\phi\le H_n$, $$ \begin{align} \log(\Gamma(x+1))-\log(\Gamma(x)) &=\int_x^{x+1}\left(-\gamma+H_{\phi-1}\right)\mathrm{d}\phi\\ &=-\gamma+\lim_{n\to\infty}\left(\int_x^nH_{\phi-1}\,\mathrm{d}\phi-\int_{x+1}^nH_{\phi-1}\,\mathrm{d}\phi\right)\\ &=-\gamma+\lim_{n\to\infty}\left(\int_x^nH_{\phi-1}\,\mathrm{d}\phi-\int_x^{n-1}H_\phi\,\mathrm{d}\phi\right)\\ &=-\gamma+\lim_{n\to\infty}\left(-\int_x^n\frac1\phi\,\mathrm{d}\phi+\int_{n-1}^nH_\phi\,\mathrm{d}\phi\right)\\[6pt] &=-\gamma+\log(x)+\lim_{n\to\infty}\left(-\log(n)+H_n\right)\\[8pt] &=\log(x)\tag{4} \end{align} $$ $\boldsymbol{\Gamma}$ is log-convex: $$ \begin{align} \frac{\mathrm{d}^2}{\mathrm{d}x^2}\log(\Gamma(x)) &=H'_{x-1}\\ &\ge0\tag{5} \end{align} $$ Verifying the Necessary Properties of the Extension $$ \begin{align} H_x-H_{x-1} &=\int_0^1\frac{1-t^x}{1-t}\,\mathrm{d}t-\int_0^1\frac{1-t^{x-1}}{1-t}\,\mathrm{d}t\\ &=\int_0^1t^{x-1}\,\mathrm{d}t\\ &=\frac1x\tag{6} \end{align} $$ $$ \begin{align} H'_x &=\int_0^1\frac{-\log(t)t^x}{1-t}\,\mathrm{d}t\\ &\ge0\tag{7} \end{align} $$ Limitations of the Extension One limitation of the extension $$ H_x=\int_0^1\frac{1-t^x}{1-t}\,\mathrm{d}t\tag{8} $$ is that it doesn't converge for $\mathrm{Re}(x)\le-1$. An extension of the Harmonic Numbers that works for all $x\in\mathbb{C}$ is $$ H_x=\sum_{k=1}^\infty\left(\frac1k-\frac1{k+x}\right)\tag{9} $$<|endoftext|> TITLE: Is there any non-abelian group with the property $AB=BA$? QUESTION [5 upvotes]: Is there any finite (resp. infinite) non-abelian group of order $\geq 8$ such that $AB=BA$ for all subsets $A, B$ with $|A|\geq 3$ and $|B|\geq 3$? ($AB=\{ab: a\in A, b\in B\}$) REPLY [6 votes]: Let $G$ be any nonabelian group and choose $a,b\in G$ such that $ab\neq ba$.
Choose distinct elements $c,d\in G\setminus\{a,bab^{-1}\}$, and choose distinct elements $e,f\in G\setminus\{b,a^{-1}ba,c^{-1}ba,d^{-1}ba\}$. This is possible since any nonabelian group has at least $6$ elements. Let $A=\{a,c,d\}$ and $B=\{b,e,f\}$. Our choice of $c,d,e,$ and $f$ exactly guarantees that $ba\not\in AB$ (we chose $c$ and $d$ such that $cb,db\neq ba$, and then we chose $e$ and $f$ such that $ae,ce,de,af,cf,df\neq ba$). Since $ba\in BA$, this means $AB\neq BA$. (More generally, we could similarly choose $A$ and $B$ to have $n$ elements as long as $|G|\geq 2n$.)<|endoftext|> TITLE: Parametric Equation for Rectangular Tubing with Corner Radius QUESTION [6 upvotes]: I'm working on a problem where I need the parametric equation for a complex shape. Parametric equation of a circle: x = a * cos θ y = a * sin θ Parametric equation of an ellipse: x = a * cos θ y = b * sin θ Parametric equation of a rectangle, reference: x = a * (|cos θ| * cos θ + |sin θ| * sin θ) y = b * (|cos θ| * cos θ - |sin θ| * sin θ) Parametric equation of a tubing with radius at the corners: ?? inputs = a, b and material thickness (i.e. corner radius = 2 * thickness) The shape I'm trying to model is hollow rectangular steel tubing. As manufactured, hollow steel tubing has a 'corner' radius = two times material thickness, per standard ASTM A500-10. Ultimately I'm working on a free tube notching calculator and pattern generator. I'd like to add a feature for rectangular tubing to rectangular tubing. I'm really relying on the parametric angular input to complete the model. I'm pulling out my hair trying to develop a functional parametric equation for the hollow steel outer shape (with radius at each corner) to use in my Descriptive Geometry analysis. Any recommendations on how to proceed here to obtain a parametrized form of a rectangle with sized corner radius? REPLY [9 votes]: Piecewise vs Parametric As discussed in the comments this can be done piecewise or parametrically. The difference is minor. Any pieces of a piecewise function can be joined into a single function by the following: $$f(t)=\begin{cases} f_1(t) & a_0 \le t < a_1 \\ f_2(t) & a_1 \le t < a_2 \\ \quad\vdots \end{cases}$$<|endoftext|> TITLE: Prove that $\frac{x^2y}{z}+\frac{y^2z}{x}+\frac{z^2x}{y} \geq x^2+y^2+z^2$ QUESTION [5 upvotes]: Let $x,y,z \in \mathbb{R}$ such that $x \geq y \geq z > 0$. Prove that $$\frac{x^2y}{z}+\frac{y^2z}{x}+\frac{z^2x}{y} \geq x^2+y^2+z^2.$$ I rearranged the inequality to get $$xy(x^2y)+yz(y^2z)+xz(z^2x) \geq xyz(x^2+y^2+z^2).$$ I then thought about using the rearrangement inequality but didn't see how to use it. How can we continue? REPLY [2 votes]: We need to prove that $\sum\limits_{cyc}(x^3y^2-x^3yz)\geq0$ or $$\sum\limits_{cyc}(z^3x^2+z^3y^2-2z^3xy)\geq\sum\limits_{cyc}(x^3z^2-x^3y^2)$$ or $$\sum\limits_{cyc}z^3(x-y)^2\geq(xy+xz+yz)(x-y)(y-z)(z-x)$$ which is obvious: for $x\geq y\geq z$ the right-hand side is non-positive while the left-hand side is non-negative.<|endoftext|> TITLE: Find the minimum value of $a^2+b^2$ QUESTION [7 upvotes]: Let $a$ and $b$ be real numbers for which the equation $x^4 +ax^3 +bx^2 +ax+1=0 \tag1$ has at least one real solution. For all such pairs $(a,b)$, find the minimum value of $a^2 + b^2$. Using $x + \frac 1 x = y$ in (1): $y^2 + ay+b-2=0 \tag2$ therefore the first condition is $a^2 - 4b + 8\ge 0$. The second one, coming from $x^2 -yx + 1=0$, is $y^2 - 4 \ge0$. The calculus is a mess, so I don't think this is the way to solve it. Does anyone have a smarter idea? UPDATE I've corrected (2) following @mathlove's suggestion.
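(Added for illustration: a crude numpy grid scan over $(a,b)$, keeping only pairs for which the quartic has a real root; it supports the value $\frac45$ derived in the answer below. Grid resolution and the real-root tolerance are arbitrary choices of mine.)

import numpy as np

def has_real_root(a, b):
    roots = np.roots([1, a, b, a, 1])
    return np.any(np.abs(roots.imag) < 1e-6)  # loose tolerance: the minimizer is a double root

grid = np.linspace(-2, 2, 161)  # step 0.025, so (0.8, -0.4) lies on the grid
best = min((a*a + b*b, a, b) for a in grid for b in grid if has_real_root(a, b))
print(best)  # ~(0.8, -0.8, -0.4): minimum a^2+b^2 = 4/5 (the sign of a is immaterial)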
REPLY [3 votes]: Divide the equation by $x^2$ $$ \begin{align} 0 &=x^2+ax+b+\frac ax+\frac1{x^2}\\ &=\left(x+\frac1x\right)^2+a\left(x+\frac1x\right)+(b-2)\tag{1} \end{align} $$ Since $\left|\,x+\frac1x\,\right|\ge2$ and the solutions to $(1)$ are $$ x+\frac1x=\frac{-a\pm\sqrt{a^2-4b+8}}2\tag{2} $$ and since the sign of $a$ does not change $a^2+b^2$, and only changes the sign of $x$, we can assume that $a\ge0$. Then we need $$ a+\sqrt{a^2-4b+8}\ge4\tag{3} $$ Since $a=1$ and $b=0$ satisfy $(3)$, the minimum of $a^2+b^2$ is at most $1$. Therefore, we can assume that $a\le1$ and $b\ge-1$. Then $(3)$ becomes $$ a^2-4b+8\ge16-8a+a^2\iff a\ge\frac{b+2}2\tag{4} $$ Therefore, $$ a^2+b^2\ge\left(\frac{b+2}2\right)^2+b^2\ge\frac45\tag{5} $$ Since $a=\frac45$ and $b=-\frac25$ satisfies $(3)$ and $a^2+b^2=\frac45$, we get $$ \min\!\left(a^2+b^2\right)=\frac45\tag{6} $$<|endoftext|> TITLE: Abel differential equation with periodic coefficient QUESTION [5 upvotes]: Consider the differential equation $$y'=a_3(x)y^3+a_2(x)y^2+a_1(x)y+a_0(x)$$ where $a_i(x)$ is continuous and periodic with period $2\pi$, $i=0,1,2,3$. Assume that $a_3(x)\ge0$ and $a_3(x)$ is not equal to $0$ for all $x$. Prove that the equation has at most three different periodic solutions with period $2\pi$. REPLY [2 votes]: Assume $y_1(x)>y_2(x)>y_3(x)>y_4(x)$ are four $2\pi$-periodic solutions. Then we get $$ \frac{y_1'-y_2'}{y_1-y_2}-\frac{y_2'-y_3'}{y_2-y_3}=a_3(y_1-y_3)(y_1+y_2+y_3)+a_2(y_1-y_3)$$ Integrating both sides from $0$ to $2\pi$, we get $$ 0=\int_0^{2\pi}\left[ a_3(y_1-y_3)(y_1+y_2+y_3)+a_2(y_1-y_3)\right] \mathrm{d}x$$ Similarly we can get $$ 0=\int_0^{2\pi}\left[ a_3(y_1-y_3)(y_1+y_4+y_3)+a_2(y_1-y_3)\right] \mathrm{d}x$$ Subtracting the two equations, we get $$ 0=\int_0^{2\pi} a_3(y_1-y_3)(y_2-y_4) \mathrm{d}x$$ It's clear that the right-hand side is larger than $0$, since the integrand is nonnegative and not identically zero. Contradiction!<|endoftext|> TITLE: Are there non-abelian groups with the property $|AB|=|BA|$? QUESTION [7 upvotes]: Regarding the question "Is there any non-abelian group with the property $AB=BA$?", now it is important for us to know that: (a) Is there any finite (resp. infinite) non-abelian group of order $\geq 8$ such that $|AB|=|BA|$ for all subsets $A, B$? (b) If the answer to (a) is positive, then is there any class of groups (e.g., solvable groups, free groups, CLT-groups, etc.) with the property? Is it true for all groups of order $\leq 16$? ($AB=\{ab: a\in A, b\in B\}$, and $|.|$ denotes cardinality) REPLY [6 votes]: This answer is complete once one appeals to Derek's comment below. (Edited after that comment.) Suppose there are two elements $b, x \in G$ such that $b^{-1} x b \ne x, x^{-1}$. Consider $A = \{ 1, x^{-1} \}$, $B = \{ b, b x \}$. Then $A B = \{ b, b x, x^{-1} b, x^{-1} b x \}$ has four elements, while $B A = \{ b, b x^{-1}, b x, b \}$ has three. Assume thus that for all $b, x \in G$ we have $b^{-1} x b \in \{ x, x^{-1} \}$. Then $G$ is Hamiltonian. But the quaternion group $Q$ does not satisfy the assumption, as shown by Derek Holt in a comment below.<|endoftext|> TITLE: Problem 4.6, I. Martin Isaacs' Character Theory QUESTION [5 upvotes]: Let $n>0$ and assume that $\chi^{(n)} \in \mathrm{Irr}(G)$ for every $\chi \in \mathrm{Irr}(G)$. Show that $G = H \times A$, where $A$ is abelian and $(|H|, n) = 1$. $\chi^{(n)}$ is defined by $\chi^{(n)}(g) = \chi(g^n)$. $\mathrm{Irr}(G)$ is the set of all irreducible characters of $G$. Here are the hints in the book: Let $d = (|G|,n)$. Show that it is no loss to assume that $(|G|/d,n) = 1$.
Let $A = \bigcap_{\chi \in \mathrm{Irr}(G)} \mathrm{ker} \chi^{(n)}$. Show that $A = \{g \in G | g^n=1\}$ and $|A| = d$. Let $H = \bigcap \{\mathrm{ker} \chi | \chi \in \mathrm{Irr}(G), \chi^{(n)} = 1_G\}$. Show $|G : H| = d$. Hints 1 and 2 are easy, and it is easy to deduce the result from the hints. As for hint 3, I have proven that $\{ g \in G | (o(g), n) = 1 \} \subseteq H$, and that $|G : H| \mid d$. And I am stuck here. Can anyone help me? REPLY [3 votes]: This seems difficult! I think an argument along the following lines might work. Unfortunately it involves induced characters, which are covered only in Chapter 5 of Isaacs' book, so it cannot be the intended solution. It is enough to prove that $|H \cap A|=1$. Since $(|A|,|G|/|A|)=1$, by the Schur-Zassenhaus Theorem $A$ has a complement $C$ in $G$. The irreducible characters $\chi^{(n)}$ with $\chi \in {\rm Irr}(G)$ have $A$ in their kernels, so they correspond to irreducible characters of $G/A$. Since $C \cong G/A$, the character $\chi^{(n)}_C$ corresponds to $\chi^{(n)}$ on $G/A$ and hence is an irreducible character of $C$. Consider the induced character $1_C^G$ and let $\chi$ be an irreducible constituent of it. Then by Frobenius reciprocity, $1_C$ is a constituent of $\chi_C$. Now, since $(|C|,n)=1$, for $\psi \in {\rm Irr}(C)$, we have $\psi^{(n)} \in {\rm Irr}(C)$, so $1_C$ is also a constituent of $\chi^{(n)}_C$. But $\chi^{(n)}_C$ is irreducible, so $\chi^{(n)}_C = 1_C$ and hence $\chi^{(n)} = 1_G$. Since no nontrivial element of $A$ is in the kernel of $1_C^G$, for each $1 \ne g \in A$, there is a constituent $\chi$ of $1_C^G$ with $g \not\in \ker \chi$, and hence $g \not\in H$, so $H \cap A = 1$ as claimed. (So in fact $C = H$.)<|endoftext|> TITLE: Other example of a non-continuous derivative QUESTION [5 upvotes]: I was trying to build an example of a function that is differentiable at $0$ and around $0$, but whose derivative is not continuous at $0$. A family of functions that works is: (thank you Andrew D. Hwang for the general form) $$ f(x) = \left\{ \begin{array}{ll} x^{1+\epsilon}\psi(x^{-\alpha}) & \mbox{if } x\ne0 \\ 0 & \mbox{if } x=0 \end{array} \right. $$ With $\psi$ a periodic and bounded function (or a modified trig function) and $\alpha>0,\epsilon>0$. Is there an example that does not belong to this family of functions? (I have found such examples, but I am not satisfied with them because of how I built them (they are not deeply different), so I'm still interested to get ideas!) REPLY [2 votes]: Suppose $f$ is differentiable on an open interval about $0$, and that $f'$ is discontinuous at $0$ (but continuous elsewhere, in the interest of delimiting the structure of "the simplest examples"). Consider the "lower" and "upper" limits of $f'$ at $0$: $$ L_{-} = \lim_{\delta \to 0^{+}} \inf_{0 < |x| < \delta} f'(x),\qquad L_{+} = \lim_{\delta \to 0^{+}} \sup_{0 < |x| < \delta} f'(x). $$ By Darboux's theorem, $\lim(f', 0)$ does not exist, so $L_{-} < L_{+}$ (strict inequality), and the interval $(L_{-}, L_{+})$ is "hit by" $f'$ infinitely many times in each neighborhood of $0$. Qualitatively, $f'$ oscillates infinitely many times (between $L_{-}$ and $L_{+}$) in every neighborhood of $0$.
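(For concreteness, the standard member of the family is $f(x)=x^2\sin(1/x)$, i.e. $\psi=\sin$ and $\epsilon=\alpha=1$; here $f'(0)=0$ exists while $f'$ takes the values $\pm1$ arbitrarily close to $0$. A short numerical sketch of my own:)

import math

def fprime(x):
    # derivative of f(x) = x^2 sin(1/x) for x != 0; f'(0) = 0 by the difference quotient
    return 2*x*math.sin(1/x) - math.cos(1/x)

for k in range(1, 9):
    x = 1 / (k * math.pi)
    print(f"x = {x:.6f}   f'(x) = {fprime(x):+.6f}")
# f'(1/(k*pi)) = -cos(k*pi) = +1, -1, +1, ...: the derivative oscillates
# between roughly L_- = -1 and L_+ = +1 in every neighborhood of 0.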
This doesn't mean that every such $f$ has the form $f(x) = x^{1 + \varepsilon} \psi(x^{-\alpha})$ with $\psi$ periodic, but does indicate why common counterexamples have this form.<|endoftext|> TITLE: Incorrect method to find a tilted asymptote QUESTION [7 upvotes]: Suppose I want to find the slanted asymptote for the graph of $\displaystyle y=\frac{x^2+x-6}{x+2}$. Using division, we have $\displaystyle y=x-1-\frac{4}{x+2};\;$ so $y=x-1$ is the slanted asymptote. I would like to find out, though, what is wrong with the following incorrect way of finding the asymptote: $\displaystyle y=\frac{x^2+x-6}{x+2}=\frac{x+1-\frac{6}{x}}{1+\frac{2}{x}}\approx\frac{x+1}{1}=x+1$, so $y=x+1$ is the slanted asymptote. REPLY [6 votes]: The problem with your second approach is that you've kept more precision than your approximation actually has — you can say $y \approx x$, but you don't have enough precision to clarify that more specifically to $y \approx x+1$ (or any other translate). In more detail, $$ \frac{1}{1 + \frac{2}{x}} \approx 1 - \frac{2}{x} $$ and consequently, $$ \frac{x+1-\frac{6}{x}}{1 + \frac{2}{x}} \approx \left( x+1-\frac{6}{x} \right) \left(1 - \frac{2}{x} \right) \approx x \cdot 1 + 1 \cdot 1 - x \cdot \frac{2}{x}$$ By neglecting the $\frac{2}{x}$ term of the denominator, you neglect the $x \cdot \frac{2}{x}$ term of this approximation — but that term is $-2$, so you're neglecting a nonnegligible quantity! Keeping the $\frac{2}{x}$ term around, the above approximation gives $x-1$, as desired. For more rigor, you can use big O notation: $$\frac{1}{1 + \frac{2}{x}} = 1 - \frac{2}{x} + O(x^{-2}) $$ $$ \frac{x+1-\frac{6}{x}}{1 + \frac{2}{x}} = \left( x+1+O(x^{-1}) \right) \left(1 - \frac{2}{x} + O(x^{-2}) \right) = x - 1 + O(x^{-1}) $$<|endoftext|> TITLE: Divisibility Proof $8\mid (x^2 - y^2)$ for $x$ and $y$ odd QUESTION [5 upvotes]: $x,y \in\Bbb Z$. Prove that if $x$ and $y$ are both odd, then $8\mid (x^2 - y^2)$. My Proof Starts: Assume $x$ and $y$ are both odd. So, $x = 2k + 1$ and $y = 2l +1$ for some integers $k$ and $l$. Thus, \begin{align} x^2 - y^2 &= (2k + 1)^2 - (2l + 1)^2 \\ &= 4k^2 + 4k + 1 - (4l^2 + 4l + 1) \\ &= 4k^2 + 4k - 4l^2 - 4l \end{align} My two concerns: 1) Is this correct so far? 2) How would I deal with the “$8\;\mid$” part? REPLY [8 votes]: All is correct; now the last expression can be written $$ 4\bigl(k(k+1)-l(l+1)\bigr) $$ and you just have to prove that $k(k+1)-l(l+1)$ is even. Hint: Can you tell whether $m(m+1)$ is even, for an integer $m$? REPLY [2 votes]: If $K$ and $l$ are even, then $K=2K_1$ and $l=2K_2$, so $4K^2+4K-4l^2-4l=16K_1^2+8K_1-16K_2^2-8K_2$, which is clearly divisible by 8. Now if $K$ and $l$ are odd, then $K=2K_1+1$ and $l=2K_2+1$, so $4K^2+4K-4l^2-4l=4(4K_1^2+4K_1+1)+8K_1+4 -4(4K_2^2+4K_2+1)-8K_2-4= 16K_1^2+16K_1+8K_1+8-16K_2^2-16K_2-8K_2-8$, which is clearly divisible by 8. If $K$ is even and $l$ is odd, or $K$ is odd and $l$ is even, it is the same calculation - try it!<|endoftext|> TITLE: Reference request: Choquet theory QUESTION [16 upvotes]: Recently I realized that many integral representation theorems (such as Herglotz' theorem, Bernstein's theorem, the Riesz representation theorem, etc.) may be systematically understood under Choquet theory. I have never been explicitly exposed to this subject, however, thus I would like to have some good introductory material on it.
Any reference that leads to the Choquet theorem is fine, but it will be much nicer if it contains some criteria for uniqueness of representation (if any such thing exists) as well as application to some well-known theorems. Thank you for reading! REPLY [4 votes]: Besides Phelps' book, which offers a very well rounded introduction to the topic, as well as the finite dimensional motivation behind it, I strongly recommend two more: Alfsen E. M., Compact Convex Sets and Boundary Integrals (1971) and Lukes, J., Maly, J., Netuka I., Spurny J., Integral Representation Theory: Applications to Convexity, Banach Spaces and Potential Theory (2010). The second one, especially, is in a class of its own. In its 700+ pages you can find a very modern treatise of Choquet's theory. For an introduction, I recommend reading Chapter 2 and then, depending on your interests, a selection of the rest of the topics of the book. In Chapter 14 you will find a lot of applications, including the ones you mentioned in the original post. Keep in mind that Choquet's Theory is demanding, so you already need to be familiar with some advanced topics from Functional Analysis and Measure Theory. Lukes et al.'s book contains a very detailed appendix which makes the book as self-contained as it can get. Additionally, it provides references to most topics it didn't cover, which is a huge plus if you want to continue to something more specialized. I've personally used all three of them during a couple of projects, having no prior knowledge of Choquet Theory, and they all helped me a lot.<|endoftext|> TITLE: Proof general state space similarity transformation to controllable canonical form QUESTION [6 upvotes]: Given a state space model of the form, $$ \begin{align} \dot{x} &= A\,x + B\,u \\ y &= C\,x + D\,u \end{align} \tag{1} $$ (I think that this would also apply to a discrete-time model.) Assuming that this state space model is controllable, I would like to find a nonsingular similarity transform $z=T\,x$, which would transform the state space to the following model, $$ \begin{align} \dot{z} &= \underbrace{T\,A\,T^{-1}}_{\bar{A}}\,z + \underbrace{T\,B}_{\bar{B}}\,u \\ y &= \underbrace{C\,T^{-1}}_{\bar{C}}\,z + \underbrace{D}_{\bar{D}}\,u \end{align} \tag{2} $$ such that it is in the controllable canonical form with, $$ \bar{A} = \begin{bmatrix} 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & \cdots & 0 & 1 \\ -a_n & -a_{n-1} & \cdots & -a_2 & -a_1 \end{bmatrix} \tag{3a} $$ $$ \bar{B} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \tag{3b} $$ When $A$ is in the Jordan canonical form, with Jordan blocks of size at most one by one (so no off-diagonal terms), $$ A = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} \tag{4} $$ with each eigenvalue of algebraic multiplicity at most one. The states of matrix $(3a)$ can be seen as integrals of the next state, and the last state as a linear combination of the previous ones; therefore it can be shown that similarity transforms of the form, $$ T = \left[\begin{array}{c c} \alpha_1 \begin{pmatrix} 1 \\ \lambda_1 \\ \lambda_1^2 \\ \vdots \\ \lambda_1^{n-1} \end{pmatrix} & \alpha_2 \begin{pmatrix} 1 \\ \lambda_2 \\ \lambda_2^2 \\ \vdots \\ \lambda_2^{n-1} \end{pmatrix} & \cdots & \alpha_n \begin{pmatrix} 1 \\ \lambda_n \\ \lambda_n^2 \\ \vdots \\ \lambda_n^{n-1} \end{pmatrix} \end{array}\right] \tag{5} $$ would bring $(4)$ to $(3a)$.
The values for $\alpha_i$ can be solved for using $\bar{B}=T\,B$ and $(3b)$. When defining $B$ as, $$ B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} \tag{6} $$ this equality can be written as, $$ \begin{bmatrix} b_1 & b_2 & \cdots & b_n \\ \lambda_1\,b_1 & \lambda_2\,b_2 & \cdots & \lambda_n\,b_n \\ \lambda_1^2\,b_1 & \lambda_2^2\,b_2 & \cdots & \lambda_n^2\,b_n \\ \vdots & \vdots & \cdots & \vdots \\ \lambda_1^{n-1}\,b_1 & \lambda_2^{n-1}\,b_2 & \cdots & \lambda_n^{n-1}\,b_n \end{bmatrix} \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \tag{7} $$ It can be noted that in this case the matrix in equation $(7)$ is the same as the transpose of the controllability matrix, $$ \mathcal{C} = \begin{bmatrix}B & A\,B & A^2B & \cdots & A^{n-1}B\end{bmatrix} \tag{8} $$ so the solution to equation $(7)$ can also be written as, $$ \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{bmatrix} = \mathcal{C}^{-T} \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \tag{9a} $$ $$ \vec{\alpha} = \mathcal{C}^{-T} \bar{B} \tag{9b} $$ The transpose of $T$ can, similar to equation $(7)$, also be written as, $$ T^T = \begin{bmatrix}\vec{\alpha} & A\,\vec{\alpha} & A^2\vec{\alpha} & \cdots & A^{n-1}\vec{\alpha}\end{bmatrix} \tag{10} $$ or, if we define a new vector $\vec{v}$ as the transpose of $\vec{\alpha}$ and substitute $\vec{\alpha}$ for the right hand side of equation $(9b)$, $$ \vec{v} = \begin{bmatrix}0 & \cdots & 0 & 1\end{bmatrix} \mathcal{C}^{-1} \tag{11a} $$ $$ T = \begin{bmatrix} \vec{v} \\ \vec{v}\, A \\ \vec{v}\, A^2 \\ \vdots \\ \vec{v}\, A^{n-1} \end{bmatrix} \tag{11b} $$ From this expression it can also be seen that if $\mathcal{C}$ is not full-rank, then such a transformation would not exist. After some testing it seems that this expression also seems to hold for any $A$ and $B$, as long as $\mathcal{C}$ is full-rank/invertible, but in that case equation $(10)$ should contain $A^T$ instead of $A$ (but when using equation $(4)$, then $A=A^T$). However I do not know how I could go about proving that this is always the case. Also a small side question: How could one define this transformation when $B$ is of size $n$ by $m$, with $m>1$? I suspect that in the controllable canonical form $\bar{B}$ should be of the form, $$ \bar{B} = \begin{bmatrix} 0 & \cdots & 0 \\ \vdots & \cdots & \vdots \\ 0 & \cdots & 0 \\ 1 & \cdots & 1 \end{bmatrix} \tag{12} $$ REPLY [3 votes]: For a single-input system the transformation that yields the controller canonical form is $$T=\left[\matrix{q\\qA\\ \vdots\\qA^{n-1}}\right]$$ where $q$ is the last row of the controllability matrix inverse, i.e. $$\mathcal{C}^{-1}=\left[\matrix{X\\ \hline q}\right]$$ This property ensures that $$qA^{i-1}b=\begin{cases}0,\quad i=1,\cdots,n-1\\ 1,\quad i=n \end{cases}$$ which can be used along with the Cayley-Hamilton theorem to prove that $$Tb=\left[\matrix{qb \\ \vdots \\ qA^{n-2}b \\qA^{n-1}b}\right]=\left[\matrix{0 \\ \vdots \\ 0 \\1}\right]=\bar{B}$$ $$TA=\left[\matrix{qA \\ \vdots \\ qA^{n-1} \\qA^{n}}\right]=\left[\matrix{qA \\ \vdots \\ qA^{n-1} \\-q\sum_{i=1}^{n}a_{n-i+1}A^{i-1}}\right]=\left[\matrix{0 & 1 & 0& \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\0 & 0 & 0 & \cdots & 1\\-a_n & -a_{n-1} & -a_{n-2}& \cdots & -a_1}\right]\left[\matrix{q \\ qA\\ \vdots \\ qA^{n-2} \\qA^{n-1}}\right]=\bar{A}T$$
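(A quick numerical check of this single-input construction, added for illustration; a numpy sketch with a random system, which is almost surely controllable:)

import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
b = rng.standard_normal((n, 1))

# controllability matrix [b, Ab, ..., A^(n-1) b] and the last row q of its inverse
C = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
q = np.linalg.inv(C)[-1, :]

T = np.vstack([q @ np.linalg.matrix_power(A, k) for k in range(n)])
Abar = T @ A @ np.linalg.inv(T)
Bbar = T @ b

print(np.round(Abar, 8))  # companion form: ones on the superdiagonal, -a_i in the last row
print(np.round(Bbar, 8))  # [0, ..., 0, 1]^T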
For the multiple-input case $B\in\mathbb{R}^{n\times m}$ the situation is more complex. The calculation involves the so-called controllability indices $\mu_1,\mu_2,\cdots,\mu_m$ and $\bar{B}$ is of the form $$\bar{B}=\left[\matrix{0 & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\0 & 0 & 0 & \cdots & 0\\ 1 & * & * & \cdots & *\\\hline 0 & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\0 & 0 & 0 & \cdots & 0\\ 0 & 1 & * & \cdots & *\\\hline \vdots & \vdots & \vdots & \ddots & \vdots\\\hline 0 & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\0 & 0 & 0 & \cdots & 0\\ 0 & 0 & 0 & \cdots & 1}\right] $$ where $*$ denotes a not necessarily zero element. The $m$ nonzero rows of $\bar{B}$ are the $\mu_1,\mu_1+\mu_2,\cdots,\mu_1+\mu_2+\cdots+\mu_m$ rows. For more details I suggest you consult the book Antsaklis and Michel, "A Linear Systems Primer".<|endoftext|> TITLE: Showing existence of a subsequence QUESTION [5 upvotes]: The question is from a previous analysis preliminary exam: Let $(M, d)$ be a compact metric space and $z ∈ M$. Let $T : M → M $ be a function which satisfies $$ d(x, y) ≤ d(T(x), T(y))$$ for all $x, y \in M,$ i.e. the distances are non-decreasing under the mapping $T$. Define $\{x_n\}$ by $x_1 = T(z)$ and $ x_{n+1} = T(x_n)$ for $n ≥ 1.$ Prove that there exists a subsequence of $\{x_n\}$ which converges to $z$. I saw some parallels between this question and the problem of showing that an isometry from a compact set to itself is a surjection. So I assumed that there is no subsequence which converges to $z$, and therefore there exist $n_0 \in \Bbb N $ and $\epsilon$ such that $d(x_m,z) >\epsilon $ for each $ m>n_{0}$. And can I say that the sequence therefore has no convergent subsequence? And I get $\epsilon< d(x_{m-n},z)=d(z,T^{m-n}(z))\le d(x_n, x_m) $ whenever $m-n>n_{0}$. However, I am not totally convinced that I have done everything correctly, and also whether the result necessarily follows from the string of inequalities. Any help would be appreciated. REPLY [3 votes]: The sequence $(x_n)$ has a converging subsequence $x_{n_k}\rightarrow p$. The subsequence is then a Cauchy sequence and $$d(x_{n_{k+1}-n_k},z)\leq d(x_{n_{k+1}},x_{n_k}) \rightarrow 0$$ One may note that if the difference $n_{k+1}-n_k$ stays bounded or, more generally, has a finite accumulation point $m\geq 1$ then $T^m z=z$ is a periodic point, so $x_{km}=z$ for all $k\geq 1$.<|endoftext|> TITLE: Riemann Sums - Prove $\lim_{n\to\infty} \frac{1}{n} \sqrt[n]{n!} = 1$ QUESTION [5 upvotes]: This is about a homework I have to do. I don't want the straight answer, just the hint that may help me start on this. To give you context, we worked on series, and we're now studying integrals, linking the two with Riemann sums. Now here is the question: Using Riemann sums, prove: $$ \lim_{n\to\infty} \frac{1}{n} \sqrt[n]{n!} = 1 $$ Which is to say, find $a,b\in \mathbb{R}$, $f$ a function, so that: $$ \frac{1}{n} \sqrt[n]{n!} = \left(b-a\over n\right)\sum_{k=1}^n f\left(a+k.{b-a\over n}\right) $$ Any help would be greatly appreciated, thanks.
EDIT: Thanks to the answers, and after looking at it for a while, it appears the limit is $1\over e$ instead of $1$. REPLY [6 votes]: $$ A=\frac{1}{n} \sqrt[n]{n!} = \sqrt[n]{\frac{n!}{n^n}} = \sqrt[n]{\frac{n(n-1)(n-2)\cdots 3\cdot 2\cdot 1}{n\cdot n\cdot n\cdots n}}$$ Now take the log: $$\ln A=\ln \sqrt[n]{\frac{n(n-1)(n-2)\cdots 3\cdot 2\cdot 1}{n\cdot n\cdot n\cdots n}} =\frac{1}{n}\ln\left(\frac{1\cdot 2\cdot 3\cdots(n-2)(n-1)n}{n\cdot n\cdot n\cdots n}\right) =\frac{1}{n}\left(\ln\frac{1}{n}+\ln\frac{2}{n}+\ln\frac{3}{n}+\dots+\ln\frac{n}{n}\right) =\frac{1}{n} \sum_{i=1}^{n}\ln\frac{i}{n}$$ and now $$\lim_{n\to\infty} \ln A=\int_{0}^{1}\ln x\,dx=\Big[x\ln x-x\Big]_{0}^{1}=-1, \qquad\text{so}\quad \ln A \to -1 \quad\text{and}\quad A\to\frac{1}{e}$$<|endoftext|> TITLE: Derivatives of the Riemann zeta function at $s = 1/2$ QUESTION [9 upvotes]: The Wolfram page http://mathworld.wolfram.com/RiemannZetaFunction.html states that "Derivatives $\zeta^{(n)}(1/2)$ can also be given in closed form", but apart from an explicit formula for $\zeta'(1/2)$ provides neither any such formula, nor any reference. Can anyone point me to an appropriate reference? REPLY [6 votes]: By the reflection formula: $$ \Gamma\left(\tfrac{s}{2}\right)\pi^{-s/2}\zeta(s) = \Gamma\left(\tfrac{1-s}{2}\right)\pi^{(s-1)/2}\zeta(1-s) $$ and by considering $\frac{d}{ds}\log(\cdot)$ of both sides $$ \tfrac{1}{2}\psi\left(\tfrac{s}{2}\right)-\tfrac{\log\pi}{2}+\tfrac{\zeta'}{\zeta}(s) = -\tfrac{1}{2}\psi\left(\tfrac{1-s}{2}\right)+\tfrac{\log\pi}{2}-\tfrac{\zeta'}{\zeta}(1-s) $$ or $$\begin{eqnarray*} \tfrac{\zeta'}{\zeta}(s)+\tfrac{\zeta'}{\zeta}(1-s)&=&\log\pi-\tfrac{1}{2}\left[\psi\left(\tfrac{s}{2}\right)+\psi\left(\tfrac{1-s}{2}\right)\right]\\&=&\log\pi+\gamma+\sum_{n\geq 1}\left[\frac{1}{2n-1-s}+\frac{1}{2n-2+s}-\frac{1}{n}\right].\tag{A}\end{eqnarray*} $$ By evaluating both sides of $(A)$ at $s=\frac{1}{2}$ we get: $$ 2\cdot\tfrac{\zeta'}{\zeta}\left(\tfrac{1}{2}\right) = \log \pi+\gamma+\tfrac{\pi}{2}+3\log 2 \tag{B}$$ which agrees with the result $(41)$ in the mentioned MathWorld page. There is no hope of computing $\zeta''\left(\frac{1}{2}\right)$ from the reflection formula (if we have a symmetric function with respect to $s=\frac{1}{2}$, its odd derivatives at such a point equal zero), but by considering the Weierstrass product for the $\zeta$ function we have $$ \frac{d^2}{ds^2}\log\zeta(s) = \frac{1}{(s-1)^2}-\sum_{n\geq 1}\frac{1}{(2n+s)^2}-\sum_{\rho}\frac{1}{(s-\rho)^2}\tag{C} $$ hence $$ \tfrac{\zeta''}{\zeta}\left(\tfrac{1}{2}\right) = \tfrac{1}{4}\left(\log \pi+\gamma+\tfrac{\pi}{2}+3\log 2\right)^2+8-2G-\tfrac{\pi^2}{4}-\sum_{\rho}\frac{1}{\left(\tfrac{1}{2}-\rho\right)^2}\tag{D}$$ where $G$ is Catalan's constant. In general $\zeta^{(k)}\left(\frac{1}{2}\right)$ can be computed by differentiating $(A)$ or $(C)$ an even number of times, then evaluating at $s=\frac{1}{2}$. We may also notice that $$ \sum_{\rho}\frac{1}{\left(\frac{1}{2}-\rho\right)^{2m}}\stackrel{\text{Cauchy}}{=}\lim_{T\to +\infty}\frac{1}{2\pi i}\oint_{R_T}\frac{\zeta'(s)}{\zeta(s)}\left(s-\tfrac{1}{2}\right)^{-2m}\,ds, \tag{E}$$ where $R_T$ is the rectangle having vertices at $-Ti,1-Ti,1+Ti,Ti$ (with an indentation avoiding the pole at $s=1$), can be simply estimated by summation by parts and the Riemann-von Mangoldt theorem. The non-trivial zeros of $\zeta$ have a distance from $\frac{1}{2}$ which is always $\geq 14$, hence the residual series provides a negligible contribution for large and even values of $k=2m$.
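(For the record, the closed form $(B)$ is easy to confirm numerically; an mpmath sketch, added for illustration:)

from mpmath import mp, zeta, log, pi, euler

mp.dps = 30
lhs = 2 * zeta(mp.mpf(1)/2, derivative=1) / zeta(mp.mpf(1)/2)
rhs = log(pi) + euler + pi/2 + 3*log(2)
print(lhs)  # 5.37217727...
print(rhs)  # identical digits, matching (B)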
<|endoftext|> TITLE: Constructing a planar shape with area $\pi^2$ QUESTION [8 upvotes]: Is there a planar shape which can be defined using only algebraic numbers and basic shapes (line, polygon, circle, parabola, etc.), which has an area of $\pi^2$? (I know this isn't a well-posed question, but what I'm interested in is whether there is any "simple" geometric construction which has an area of $\pi^2$. You may restate the problem if you think there is a better way of phrasing it). (Also, I would rather that the shape be constructed in a finite number of steps, so no limits...) REPLY [2 votes]: By wrapping a thread around a circle, it's possible to mark off a segment of length $\pi$ as the circumference of a circle of unit diameter, then to construct a square of side $\pi$. For example: Similarly, using an Archimedean spiral $r = b\theta$, we can mark off a segment of length $2\pi b$ (the radial separation of successive windings), and then construct a square of side $\pi$ (by taking $b = \frac{1}{2}$, for example). Along different lines, the region $a_{1} + b\theta \leq r \leq a_{2} + b\theta$ for $0 \leq \theta \leq 2\pi$ (bounded by Archimedean spirals and segments whose endpoints are intersections of the spirals with the positive $x$-axis) has area \begin{align*} \frac{1}{2} \int_{0}^{2\pi} \bigl[(a_{2} + b\theta)^{2} - (a_{1} + b\theta)^{2}\bigr]\, d\theta &= \frac{1}{2} \int_{0}^{2\pi} (a_{2} - a_{1})(a_{2} + a_{1} + 2b\theta)\, d\theta \\ &= (a_{2} - a_{1}) \bigl[\pi(a_{2} + a_{1}) + 2b\pi^{2}\bigr]. \end{align*} Taking $a_{1} = 0$, $a_{2} = 1$, and $b = \frac{1}{2}$, for example, gives a region of area $\pi + \pi^{2}$. The excess can be whittled away by removing disks, say, nine disks of radius $\frac{1}{3}$ centered on the intersections of the Archimedean spiral $r = \frac{1}{2}(1 + \theta)$ with the rays making (constructible!) angles $\frac{\pi}{10}(2i+1)$, for $1 \leq i \leq 9$:<|endoftext|> TITLE: Double absolute value proof: $||a|-|b||\le |a-b|$ QUESTION [10 upvotes]: I guess I know how to solve inequalities with absolute value, but I have problems with this one. $||a|-|b||\le |a-b|$ $a,b\in \mathbb{R}$ I tried to solve the inequality like this: Case 1: $a>0$; Case 2: $a<0$. I started with Case 1; then we have two possibilities, $b<0$ and $b>0$. Firstly I took $b<0$, and I had another two possibilities: $|a+b|>0$ and $|a+b|<0$. I take $|a+b|>0$, and in this case I had two possibilities according to the right side of the inequality: $|a-b|>0: \quad a+b\le a-b \implies b\le-b$ $|a-b|<0: \quad a+b\le -a+b \implies a\le -a$ I can do the same with the other possibilities, but how do I know if my solution (or even method) is right? There are so many conditions that I am lost in them. Thank you for your time. REPLY [3 votes]: Instead of considering signs of $a$ and $b$ separately, you can consider their relative signs: if they have the same sign (or one is zero), then the two sides are equal, while if they have different (nonzero) signs, the right hand side is bigger (equal to the sum of the absolute values). If this is not obvious to you, then you can simplify a bit by assuming that $a$ is nonnegative: you can do that because flipping the signs of both $a$ and $b$ does not change either side.<|endoftext|> TITLE: How fast does iterated exponentiation converge?
QUESTION [5 upvotes]: Iterated exponentiation is defined by $$x \mapsto x^{x^{x^{\cdot^{\cdot^{\cdot}}}}}$$ or more conveniently, we denote by $^rx$ the expression $\underbrace{x^{x^{\cdot^{\cdot^{\cdot^{x}}}}}}_{r \text{ times}}$. Let us define the function $$\begin{array}{cccc} f_x:& \mathbb{N}& \to & \mathbb{R}\\ & r & \mapsto & ^rx. \end{array} $$ Euler proved that $\lim_{r \to \infty} f_x(r)$ exists for real numbers $x \in [e^{-e}, e^{1/e}]$ ([1]) (there is a convergence result in $\mathbb{C}$, but I am only interested in real numbers here). Now we consider the Lambert $W$ function, which is defined to be the function satisfying $$W(z)e^{W(z)} = z.$$ Then we know from [1] that when $f_x$ converges, it converges to $$\lim_{r \to \infty} f_x(r) = \frac{W(-\ln(x))}{-\ln(x)}.$$ My question is: Given $\epsilon > 0$, how can we determine $r_0 \in \mathbb{N}$ such that $\left|f_x(r_0) - \frac{W(-\ln(x))}{-\ln(x)} \right| < \epsilon$? Is it possible to solve this algebraically, or is it only possible to do it numerically? REPLY [3 votes]: In principle, the Abel function or the Schröder function can be used for this. In the following I change your notation to mine: $f_b(x)=b^x$ for the iterable base function, and $f^{oh}_b(x)$ for the $h$-times iterated function itself ($h$ is often called the "iteration height"). The Schröder function, say $\sigma(x)$, provides an analogue of the $\operatorname{slog}(x)$ function, where the $\operatorname{slog}(x)$ gives directly the required iteration height to arrive from $x_0=1$ at $x_h$ by $f_b^{oh}(x_0) = x_h$, and so simply interprets some given value $x$ as the $h$'th iterate away from $x_0=1$; in short: it interprets any (suitable) given $x$ as $x_h$ and allows computing the required $h$. The Schröder function gives a similar model; it allows the construction $$ f_b^{oh}(x_0) = \sigma^{-1}(c^h \sigma(x_0)) = x_h \tag 1$$ where $c$ is some constant depending on $b$. Let's denote the fixpoint, which is the limit $x_\infty$, by $\omega$, and let $x_h$ be the point $\varepsilon$ larger than $\omega$; then the/your question is: $\qquad \qquad $ for which $h$ is $ \qquad f_b^{oh}(x_0) = x_h = \omega + \varepsilon \qquad \qquad$ ? The Schröder function requires that the involved power series be recentered around the fixpoint $\omega$. With that done, this can be computed by $$\begin{array} {rll} \sigma^{-1}(c^h \cdot \sigma(x_0-\omega) ) +\omega &=& x_h \\ c^h \cdot \sigma(x_0-\omega) &=& \sigma(x_h-\omega) \\ c^h &=& w= \sigma(x_h-\omega) / \sigma(x_0-\omega) \\ h &=& \log_c(w) \end{array}$$ Unfortunately, the $\sigma(x)$ and $\sigma(x-\omega)$ are not known to be expressible in simple forms, and the coefficients of their power series have no simple general form either, so not only is the result of the power series $\sigma()$ an approximation, but even its coefficients are (in general). The Schröder function is usually estimated by a limit expression (which I shall not show here). In the tetration-forum there are a multitude of discussions of how to compute the Abel function (representing the $\operatorname{slog}()$-concept) or the Schröder (also denoted Koenigs) function; a - in my view - conceptually simple method for the construction/approximation of the power series for the Schröder function is to use finite $n\times n$ "Carleman" matrices, where $n$ can be increased to get better approximations.
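(Before the Carleman-matrix details, the limit expression mentioned above can be made concrete: by Koenigs, $\sigma(z)=\lim_{n\to\infty} g^{on}(z)/c^n$ for the recentered map $g$ with multiplier $c$. The following Python sketch is my own illustration, using the same base $b=\sqrt2$ as the Pari/GP session further below:)

import math

b = math.sqrt(2.0)
t = 2.0                  # attracting fixpoint: b**t == t
c = math.log(2.0)        # multiplier g'(0) = log(t)

def g(z):                # recentered map g(z) = b**(z + t) - t
    return 2.0 * (2.0**(z / 2.0) - 1.0)

def sigma(z, n=60):      # Koenigs' limit: iterate g n times, divide by c**n
    for _ in range(n):
        z = g(z)
    return z / c**n

x0, eps = 2.3, 0.001
print(sigma(x0 - t), sigma(eps))               # ~0.360409 and ~0.00100057, as below
print(math.log(sigma(eps) / sigma(x0 - t)) / math.log(c))  # ~16.06: the required height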
Carleman matrices for some function $f(x)$ basically contain the coefficients of the power series of that $f(x)$ as well as of all its powers $f(x)^c$, where the power $c$ is here identical with the column index. Denoting by $V(x)$ an infinite "Vandermonde" vector $V(x) = \operatorname{rowvector}[1,x,x^2,x^3,x^4,...]$, we'll have, with the Carleman matrix $F$ for the function $f(x)$, the form $$ V(x) \cdot F = V(f(x)) \\ V(f(x)) \cdot F = V(f^{o2}(x)) \\ \vdots \\ V(x) \cdot F^h =V(f^{oh}(x)) \tag 2$$ and fractional iteration (for the convergent cases) is then approximable using the diagonalization to a diagonal matrix $D$ $$ F = M \cdot D \cdot W \qquad \qquad \text{where } W=M^{-1} \tag 3$$ (I like to use $W$ in these formulae instead of the LaTeX-difficult $M^{-1}$.) Then powers of $F$ can be exactly expressed by powers of the diagonal elements $d_{k,k}$ of $D$ only, which are scalars and, as long as they are positive, admit any fractional power $d_{k,k}^h$. Of course, using that diagonalization to find the Schröder function, we need $F$ for $f(x+\omega)-\omega$. With this, the Schröder function is conceptually $$ V(x_0-\omega) \cdot M = V( \sigma(x_0-\omega)) \tag 4$$ For truncated matrices $F$ (and only such can we actually use) the result on the rhs is not really of Vandermonde form (but the diagonal in $D$ is of that form), and we can approximate $$ V(x_0-\omega) \cdot M = Y_0\\ V(x_h-\omega) \cdot M = Y_h\\ $$ and find $h$ for the required $D^h$ - such that $ Y_0 \cdot D^h = Y_h$ - based on evaluation of the entries in $Y_0$ and $Y_h$. For bases $b$ in the range $1 \lt b \lt \exp(\exp(-1)) \approx 1.44$ (with sufficient distance to those borders) and $32\times 32$ matrices we get approximations which can surely be used to, say, 8 digits precision, and there are many alternative procedures discussed in the tetration-forum to improve that approximation to many more digits and to allow bases nearer to the borders of the range. So focusing on your question again: we do not have an "exact" expression for the required iteration height ( "how far is $\omega + \varepsilon$ from $x_0$ in terms of the iteration height $h$?" ), but the concept of $\operatorname{slog}()$ (which actually implements the so-called "Abel" function) and of the Schröder function give at least ideally a "closed form" as an evaluatable power series (like that for $\exp(x)$), but there's not yet an easy relation to common "closed" expressions like $\sqrt[n]{x}, \log(x),\exp(x)$. A short view into an actual example, using Pari/GP. I use the base for exponentiation $b=\sqrt{2}$, which is nicely in the range $1\lt b\lt e^{1/e}$. The attracting fixpoint is $x_\infty = t=2$ such that $b^t=t$. To get the recentered Schröder function I first define the recentered and rescaled function $$g_t(x)=f_b(x+t) -t = (2^{x/2}-1)\cdot 2 $$ From this I get the Carleman matrix G, which is triangular, and the top-left segment looks like
1  .             .           .          .         .
0  0.693147      .           .          .         .
0  0.120113      0.480453    .          .         .          = G
0  0.0138760     0.166512    0.333025   .         .            size 6x6 shown
0  0.00120227    0.0336635   0.173126   0.230835  .
0  0.0000833347  0.00500008  0.0500008  0.160003  0.160003
Here we see the coefficients for $\small (2^{x/2}-1)\cdot 2 = 0.693147 x + 0.120113 x^2 + 0.0138760 x^3 + 0.00120227 x^4 + ... $ in the second column. The diagonalization of G using Pari/GP M=mateigen(G);W=M^-1;D=W*G*M would give some matrix M where the norming of the columns is unfortunate; to make it a Carleman matrix again, it is required that the diagonal has only ones.
Norming M to this form (and W accordingly) I get

M                                              D            W
1  .          .         .        .        .  |  1        |  1  .           .          .         .        .  |
.  1          .         .        .        .  |  0.693147 |  .  1           .          .         .        .  |
.  0.564723   1         .        .        .  |  0.480453 |  .  -0.564723   1          .         .        .  |
.  0.299646   1.12945   1        .        .  |  0.333025 |  .  0.338178    -1.12945   1         .        .  |
.  0.155932   0.918204  1.69417  1        .  |  0.230835 |  .  -0.210331   0.995267   -1.69417  1        .  |
.  0.0803519  0.650299  1.85567  2.25889  1  |  0.160003 |  .  0.134455    -0.802616  1.97127   -2.25889 1  |
-  -  -  -  -  -  +  -  +  -  -  -  -  -  -  +

We can see that the diagonal entries $d_{k,k}$ of D are the consecutive powers of $\log(t)$. The coefficients of the power series for the Schröder function are now in the second column of M, so we have $$ \small \sigma(x) = x + 0.564723 x^2 + 0.299646 x^3 + 0.155932 x^4 + ...$$ Having this, we try an example. We set $x_0$ not too far from the fixpoint, so that $\sigma(x_0-t) $ converges, for instance $x_0= 2.3$. Next we choose some small eps, say $\varepsilon = 0.001$, such that the critical $x_h = t+\varepsilon=2.001$. We get $\sigma_0 = \sigma(x_0-t) \approx 0.360409 $ and $\sigma_h = \sigma(t+\varepsilon-t) = \sigma(\varepsilon) \approx 0.00100057 $. The required iteration height $h$ to arrive from $x_0$ at $x_h$ should now be $$ h= \log(\sigma_h / \sigma_0) / \log(\log(t)) \approx 16.0613 $$ and indeed we confirm this by the following check:

x1=x0;for(k=1,16,x1=b^x1);x1 \\ iterate x0 16 times towards the fixpoint
%227 = 2.00102
t+eps \\ the epsilon above the fixpoint
%228 = 2.00100
x2=x0;for(k=1,17,x2=b^x2);x2 \\ iterate x0 17 times towards the fixpoint
%229 = 2.00071

and we find that the $\varepsilon$-point is between 16 and 17 iterations away from $x_0$. A general remark: There is not yet an agreed solution for tetration in general. To show how there are different results for different methods of computing $x_h$, I've made a small essay for the discussion in our "tetration forum". I used the base $b=4$, which is outside the range $1..\exp(\exp(-1))$ and whose fixpoint $x_{-\infty}$ is complex, and made pictures for a few selected methods, making the huge differences visible. The short presentation is here<|endoftext|> TITLE: $A \subset \mathbb{R}$ such that $A$ is homeomorphic to $\mathbb{R} \setminus A$ QUESTION [6 upvotes]: For two topological spaces $(X,T_X)$ and $(Y,T_Y)$, I write $X \simeq Y$ if $X$ and $Y$ are homeomorphic. If $A \subset X$, I always endow $A$ with the subspace topology. I wonder if it is true that: $\exists A \subset \mathbb{R}$ such that $A \simeq \mathbb{R} \setminus A$. It doesn't hold for connected sets (i.e. intervals: either $\mathbb{R} \setminus A$ would not be connected, or one of $A$ or $\mathbb{R} \setminus A$ would not be connected after removing a point whereas the other one may still be), nor for compact sets (as $\mathbb{R}$ would be the union of two compact sets, hence compact), nor for countable or co-countable sets (as $A$ and $\mathbb{R} \setminus A$ would have different cardinalities). If this is true, I wonder if moreover: $\exists A \subset \mathbb{R}$ open such that $A \simeq \mathbb{R} \setminus A$. Thanks. REPLY [4 votes]: Noah answered the first question. For the second question, the answer is no. For if $A$ were homeomorphic to $\mathbb{R} \backslash A$ with homeomorphism $f$, it would follow that $A \to \mathbb{R} \backslash A \hookrightarrow \mathbb{R}$ is an open map (invariance of domain), hence $\mathbb{R} \backslash A$ would be open in $\mathbb{R}$. Hence, it would have to be $\mathbb{R}$ (since it is also closed, being the complement of an open set, and clearly cannot be the empty set).
It follows that $A$ is empty, which is absurd.<|endoftext|> TITLE: Is an expanding map on a compact metric space continuous? QUESTION [10 upvotes]: I got inspired by this question Existence of convergent subsequence to think about the following problem: Suppose you have a compact metric space $(X,d)$ and an expanding map $T:X\rightarrow X$, i.e. $d(Tx,Ty)\geq d(x,y)$ for every $x,y\in X$. Is the map $T$ then continuous? Without assuming continuity we already know that $T^n(X)$ must be dense in $X$ for any $n\geq 1$, since given $z\in X$ the orbit of $z$ must accumulate upon $z$. But showing continuity has escaped me so far. If there are counter-examples using the Axiom of Choice, that also interests me. REPLY [9 votes]: Such a map $T$ must in fact be an isometry. Indeed, given $x,y\in X$, let $x_n=T^n(x)$ and $y_n=T^n(y)$. By compactness, we can find $n_k$ such that both $(x_{n_k})$ and $(y_{n_k})$ converge, and by passing to a further subsequence we may assume $n_{k+1}-n_k$ is increasing. Setting $m_k=n_{k+1}-n_k$, we then find as in your answer to the linked question that $(x_{m_k})$ converges to $x$ and $(y_{m_k})$ converges to $y$. In particular, $d(x_{m_k},y_{m_k})$ must converge to $d(x,y)$. But as long as $m_k>0$, $d(x_{m_k},y_{m_k})\geq d(T(x),T(y))$. It follows that we must have $d(T(x),T(y))\leq d(x,y)$. (By compactness of $X$ and your observation that $T(X)$ is dense, it follows that $T$ is also surjective.)<|endoftext|> TITLE: Approximating the log of the Modified Bessel Function of the Second Kind QUESTION [8 upvotes]: I'm attempting to find as precise as possible an approximation to the logarithm of the Modified Bessel Function of the Second Kind: $$\log K_{\alpha}(x) = \log\Big[\frac{1}{2}\int_0^{\infty} t^{\alpha-1} \exp\{-\frac{1}{2}x(t+t^{-1})\}dt\Big].$$ The problem is that $K_{\alpha}(x)$ diverges both as $\alpha\rightarrow\infty$ and as $x\rightarrow0$, which is why I'm taking the logarithm in the first place. But of course the log trick only works if I can use it to break up the expression in some way, which I can't do in this case because I can't pass it through the integral. There is an asymptotic approximation for large $\alpha$, given by: $$K_{\alpha}(x)\sim\sqrt{\frac{\pi}{2\alpha}}\Big(\frac{ex}{2\alpha}\Big)^{-\alpha}.$$ Clearly this approximation can be broken up by the log; unfortunately it is not sufficiently precise (for my purposes) for values of $\alpha$ and $x$ which cause overflow. Does anyone have another way to get a more precise estimate for $\log K_{\alpha}(x)$? If you want you can assume that $\alpha=k-\frac{1}{2}$ for $k\in{\bf Z}_+$. REPLY [6 votes]: I am currently struggling with the same problem, getting a bit further but not all the way. If you solved it in the meantime, I'd be glad to hear about it. EDIT 2: Improved the answer some more as my technique improved. EDIT 3: The pull request (linked below) now has a somewhat better implementation than discussed here that reduces the error a bit further, but still breaks for some values. C++ code (written for the Stan math library) can be found at https://github.com/stan-dev/math/pull/1121 Case 0: $x$ small relative to $\alpha$ I use this when $\alpha > 15$ AND $10x < \sqrt{\alpha + 1}$. The formula can be found on Wikipedia on Bessel K and also as formula 1.10 of Temme, Journal of Computational Physics, vol 19, 324 (1975).
It has: $$ K_\alpha(x) \simeq \frac{\Gamma(\alpha)}{2} \left( \frac{2}{x} \right)^\alpha $$ Case 1: Small to medium $\alpha$ and $x$ My main workhorse is using the logarithm of Equation 26 of Rothwell: Computation of the logarithm of Bessel functions of complex argument and fractional order, which has (maintaining their notation) $$ K_\nu(z) = \frac{\sqrt{\pi}}{\Gamma(\nu + \frac{1}{2})}(2z)^{-\nu}e^{-z} \int_0^1 [\beta e^{-u^\beta}(2z + u^\beta)^{\nu-1/2} u^{n-1} + e^{-1/u} u^{-2\nu-1}(2zu + 1)^{\nu-1/2}] \mathrm{d}u $$ where $\beta = \frac{2n}{2\nu+1}$ and the authors suggest using $n = 8$. You still can't push the log through the integral, but for large regions of the parameter space it is OK to just compute the integral and take the log afterwards. The integral ceases to be numerically tractable around roughly $\nu > 50$, or when $\nu > 0.5$ AND $\log(z) > \frac{300}{\nu - 0.5} - \log(2)$ (one of the summands turns to infinity). Further, the formula ceases to be very accurate for small $z$ and small $\nu$: below roughly $z < 10^{-4}$ the relative error starts climbing from around $10^{-30}$, reaching around $10^{-2}$ for $z < 10^{-7}$. There is a C++ implementation of the Rothwell formula at https://github.com/stan-dev/stan/wiki/Stan-Development-Meeting-Agenda/0ca4e1be9f7fc800658bfbd97331e800a4f50011 Case 2: $\alpha \gg x$ Here the log of the approximate formula you've written works OK, but I got slightly better results with the same formula as in Case 0. Case 3: $x \gg \alpha$ Asymptotic formulae such as 10.40.2 here: https://dlmf.nist.gov/10.40 can be used. Once again, you need to compute the sum on the non-log scale (it has negative elements) and take the log afterward, but this has worked great for me. Summing the first 10 elements worked OK in my case. Remaining cases For large $x$ with comparably large $\alpha$, I still haven't found a reliable solution. Here is a plot of the error with respect to the recursive formula for consecutive neighbours with the approach I've described (ratio is the relative error of the actual value; I also test the gradient of this function wrt. both parameters as computed by Stan's autodiff; the gradient is a tougher problem, but probably not relevant here): The biggest relative error shown is 3.8e-02 for $\alpha = 148$ and $x = 105$.<|endoftext|> TITLE: Convex function can be written as supremum of some affine functions QUESTION [13 upvotes]: Let $\phi: \mathbb{R} \to \mathbb{R}$ be a convex function. Prove that $\phi$ can be written as the supremum of some affine functions $\alpha$, in the sense that $\phi(x) = \sup_\alpha \alpha(x)$ for every $x$, where each $\alpha$ is defined by $$\alpha: x \mapsto a_\alpha x + b_\alpha$$ for some $a_\alpha$ and $b_\alpha$. My progress is as follows. I can show that if $\phi$ is convex and $x \in \mathbb{R}$, there exists a real number $c$ such that $$\phi(y) \ge \phi(x) + c(y - x)$$ for all $y \in \mathbb{R}$. But I am at a loss on how to continue and finish. Could anybody help? REPLY [9 votes]: If $\phi$ is convex, then for each point $(\alpha, \phi(\alpha))$ there exists an affine function $f_\alpha(x) = a_\alpha x + b_\alpha$ such that the line $L_\alpha$ corresponding to $f_\alpha$ passes through $(\alpha, \phi(\alpha))$ and the graph of $\phi$ lies above $L_\alpha$. Let $A = \{f_\alpha: \alpha \in \mathbb{R}\}$ be the set of all such functions.
We have $\sup_{f_\alpha \in A} f_\alpha(x) \geq f_x(x) = \phi(x)$ because $f_x$ passes through $(x, \phi(x))$; $\sup_{f_\alpha \in A} f_\alpha(x) \leq \phi(x)$ because all $f_\alpha$ lie below $\phi$. We conclude that $\sup_{f_\alpha \in A} f_\alpha(x)= \phi(x)$.<|endoftext|> TITLE: Limits at infinity by rationalizing QUESTION [5 upvotes]: I am trying to evaluate this limit for an assignment. $$\lim_{x \to \infty} \sqrt{x^2-6x +1}-x$$ I have tried to rationalize the function: $$=\lim_{x \to \infty} \frac{(\sqrt{x^2-6x +1}-x)(\sqrt{x^2-6x +1}+x)}{\sqrt{x^2-6x +1}+x}$$ $$=\lim_{x \to \infty} \frac{-6x+1}{\sqrt{x^2-6x +1}+x}$$ Then I multiply the function by $$\frac{(\frac{1}{x})}{(\frac{1}{x})}$$ Leading to $$=\lim_{x \to \infty} \frac{-6+(\frac{1}{x})}{\sqrt{(\frac{-6}{x})+(\frac{1}{x^2})}+1}$$ Taking the limit, I see that all x terms tend to zero, leaving -6 as the answer. But -6 is not the answer. Why is that? REPLY [4 votes]: Your error is here: $$\frac{\sqrt{x^2-6x +1}+x}{x}=\sqrt{1-\frac{6}{x}+\frac{1}{x^2}}+1$$ REPLY [2 votes]: It leads to $$=\lim_{x \to \infty} \frac{-6+(\frac{1}{x})}{\sqrt{1-(\frac{6}{x})+(\frac{1}{x^2})}+1}$$ And so the limit is $-3$<|endoftext|> TITLE: Sum of the series $\binom{n}{0}-\binom{n-1}{1}+\binom{n-2}{2}-\binom{n-3}{3}+..........$ QUESTION [5 upvotes]: The sum of the series $$\binom{n}{0}-\binom{n-1}{1}+\binom{n-2}{2}-\binom{n-3}{3}+..........$$ $\bf{My\; Try::}$ We can write it as $\displaystyle \binom{n}{0} = $ Coefficient of $x^0$ in $(1+x)^n$ Similarly $\displaystyle \binom{n-1}{1} = $ Coefficient of $x^1$ in $(1+x)^{n-1}$ Similarly $\displaystyle \binom{n-2}{2} = $ Coefficient of $x^2$ in $(1+x)^{n-2}$ Now, how can I solve it from here? Help required, thanks. REPLY [3 votes]: [Imported from a duplicate question] Chebyshev polynomials of the second kind have the following representation: $$ U_n(x)=\sum_{r\geq 0}\binom{n-r}{r}(-1)^r (2x)^{n-2r} \tag{1}$$ hence the wanted sum is just $U_n\left(\frac{1}{2}\right)$, and since $\frac{1}{2}=\cos\frac{\pi}{3}$, $$ U_n\left(\frac{1}{2}\right) = \frac{\sin((n+1)\pi/3)}{\sin(\pi/3)}.\tag{2} $$<|endoftext|> TITLE: If $(X_n)$ is i.i.d. and $ \frac1n\sum\limits_{k=1}^{n} {X_k}\to Y$ almost surely then $X_1$ is integrable (converse of SLLN) QUESTION [5 upvotes]: Let $(\Omega,\mathcal F,P)$ be a finite measure space. Let $X_n:\Omega \rightarrow \mathbb R$ be a sequence of iid r.v.'s. I need to prove that if $ n^{-1}\sum _{k=1}^{n} {X_k} $ converges almost surely to $Y$, then all $X_k$ have expectation. If I understand correctly, "$X_k$ has expectation" means $X_k$ is in $\mathcal L^1(\Omega)$. And I know that on a finite measure space convergence in expectation is convergence in $\mathcal L^1(\Omega)$, and it's stronger than almost sure convergence. And I know from linearity of expectation that if one of the sequence is not in $\mathcal L^1(\Omega)$ then $Y$ is not in $\mathcal L^1(\Omega)$. How do I continue? REPLY [6 votes]: The statement is actually the converse of the strong law of large numbers. Let $(X_n)_{n \in \mathbb{N}}$ be a sequence of iid random variables and suppose that the sequence $S_n := \sum_{j=1}^n X_j$ satisfies $n^{-1} S_n \xrightarrow[]{n \to \infty} Y$ almost surely for some random variable $Y$.
Then $$\mathbb{E}(|X_1|)<\infty.$$ Proof: Since $$\frac{X_n}{n} = \frac{S_n}{n} - \frac{n-1}{n} \frac{S_{n-1}}{n-1}$$ we find that $X_n/n$ converges to $0$ almost surely; in particular, $$\mathbb{P} \left( \left| \frac{X_n}{n} \right| \geq 1 \, \, \text{infinitely often} \right)=0.$$ Applying the (converse) Borel-Cantelli lemma, we obtain $$\sum_{n \geq 1} \mathbb{P}(|X_1| \geq n) = \sum_{n \geq 1} \mathbb{P} \left( \left| \frac{X_n}{n} \right| \geq 1 \right) < \infty.$$ As $$\mathbb{E}(|X_1|) \leq 1 + \sum_{n \geq 1} \mathbb{P}(|X_1| \geq n)$$ (see e.g. this question for a proof of this inequality), this proves $\mathbb{E}(|X_1|)<\infty$.<|endoftext|> TITLE: Derivative of matrix multiplication w.r.t. a matrix - how to write? QUESTION [5 upvotes]: If I have a matrix $W$: $\begin{bmatrix} w_{00} & w_{01} & w_{02} \\ w_{10} & w_{11} & w_{12} \\ w_{20} & w_{21} & w_{22}\\ w_{30} & w_{31} & w_{32} \end{bmatrix} $ and a matrix $X$: $\begin{bmatrix} x_{00} & x_{01} & x_{02} & x_{03}\\ x_{10} & x_{11} & x_{12} & x_{13} \\ x_{20} & x_{21} & x_{22} & x_{23} \end{bmatrix} $ How do I write out the derivative of $Z=XW$ w.r.t. the matrix $W$? I know $Z$ is $3\times 3$: $\begin{bmatrix} z_{00} & z_{01} & z_{02} \\ z_{10} & z_{11} & z_{12} \\ z_{20} & z_{21} & z_{22} \end{bmatrix} $ Since $Z$ is $3\times 3$, is $\frac{\partial{Z}}{\partial{W}}$ a $9\times 12$ or a $12\times 9$ matrix? REPLY [7 votes]: I think it is more appropriate in this case to work exclusively in matrix notation. Let me explain. You have a function $f : \mathrm{Mat}_{n \times p}(\mathbb R) \times \mathrm{Mat}_{p \times m}(\mathbb R) \to \mathrm{Mat}_{n \times m}(\mathbb R)$ sending a pair of matrices $(X,Y)$ to their product $f(X,Y) \overset{def}=XY$. In terms of differential geometry, if we are given a "point" in $\mathrm{Mat}_{n \times p}(\mathbb R) \times \mathrm{Mat}_{p \times m}(\mathbb R)$ (i.e. two matrices), the tangent space is canonically isomorphic to the space itself (since it is a linear manifold) and tangent vectors are just pairs of matrices. We also have a canonical basis consisting of the $(E_{i,j},0)$ and $(0, E_{k,\ell})$ where the indices $(i,j)$ range over $(1,1),\cdots,(n,p)$ and similarly, the indices $(k,\ell)$ range over $(1,1),\cdots,(p,m)$. Using the standard definition of directional derivative, $$ \frac{\partial f}{\partial (E_{i,j},0)} = \lim_{\varepsilon \to 0} \frac{(X+\varepsilon E_{i,j})Y - XY}{\varepsilon} = \lim_{\varepsilon \to 0} \frac{\varepsilon E_{i,j}Y}{\varepsilon} = E_{i,j}Y. $$ (Feel free to skip the differential geometry blabla if you agree with the latter equation.) Similarly, you can deduce that $\frac{\partial f}{\partial (0,E_{k,l})} = XE_{k,l}$. In the same way that the Jacobian matrix of a function $g : \mathbb R^n \to \mathbb R^m$ gives you an $m \times n$ matrix, the Jacobian matrix of the function $f$ gives us an $nm \times (np + pm)$ matrix, something quite discouraging. To enlighten us, we use the fact that our function $f$ is quadratic in the coefficients of $X$ and $Y$.
Let us use the following formula to compute the "Taylor expansion" of this function at a pair of matrices $(X_0,Y_0)$: $$ XY - X_0Y_0 = (X-X_0 + X_0)(Y-Y_0 + Y_0) - X_0 Y_0 \\ = \underset{\text{Jacobian (linear) term}}{\underbrace{(X-X_0)Y_0 + X_0 (Y-Y_0)}} + \underset{\text{Hessian (quadratic) term}}{\underbrace{\frac 12 \left( 2(X-X_0)(Y-Y_0) \right)}} $$ This suggests that $J_f(X_0,Y_0)(X,Y) = XY_0 + X_0Y$ and $H_f(X_0,Y_0)((X,Y),(X,Y)) = 2XY$ (note that we need two pairs of matrices as arguments since the Hessian is a quadratic form, and we kept the arguments equal for the moment!). This is simply an application of the standard formula for vectors $x_0,x \in \mathbb R^n$ where $g : \mathbb R^n \to \mathbb R^m$ $$ g(x) = g(x_0) + J_g(x_0)(x-x_0) + \frac 12 H_g(x_0)(x-x_0,x-x_0) $$ The tensor here $H_g(x_0)$ is of order $3$; think of each coordinate of $\mathbb R^m$ as having its own Hessian matrix, and $H_g$ is those $m$ matrices patched together. If for some reason you are interested in the Hessian, note that for vectors $x_0,x, x' \in \mathbb R^n$ where $g : \mathbb R^n \to \mathbb R^m$ $$ \frac 12 H_g(x_0)(x,x') = \sum_{i,j=1}^n x_i x_j' \frac{\partial^2 g}{\partial x_i \partial x_j}(x_0) $$ so if we repeat the same idea but for our function, we get the formula $$ \frac 12 H_f(X_0,Y_0)((X,Y),(X',Y')) = \sum_{(i,j),(k',\ell')} x_{ij} y'_{k',\ell'} E_{ij} E_{k',\ell'} = \left( \sum_{(i,j)} x_{ij} E_{ij} \right) \left( \sum_{(k',\ell')} y'_{k',\ell'} E_{k',\ell'} \right) = XY'. $$ The reason why those are the only terms appearing in the sum is that for the other ones, the second-order partial derivatives of $f$ vanish. Multiplying the above by $2$ generalizes our formula obtained via Taylor expansion (because we had only dealt with the case where $(X,Y) = (X',Y')$). In particular, the Hessian is a constant tensor of total order $3$ (i.e. it does not depend on $X_0$ or $Y_0$), which is a characteristic property of quadratic functions. To understand what "of total order $3$" means, consider this idea: if you have a vector, taking inner products with a vector of the same length (a tensor of order $1$) gives you a number. If you have a matrix, taking the products with two vectors, one for each dimension of the matrix, you get back a scalar. In the case of our above Hessian, taking two vectors $(X,Y)$ and $(X',Y')$ and taking the appropriate products gives $XY'$, another vector (in the vector space $\mathrm{Mat}_{n \times m}(\mathbb R)$). See this for more on tensors. But now that this trick is dealt with, back to the original question: what is $\frac{\partial XY}{\partial X}(X_0,Y_0)$? Come back for a moment to the definition of the directional derivative. What we do in this context is consider a function $g(x,y)$ of two variables as a function of a single variable $x$ to evaluate the partial derivative at $(x_0,y_0)$. We can do the same here using the concept of the Fréchet derivative instead! This also means that $\frac{\partial f}{\partial X}$ is not a number or a matrix, but a linear operator on the space of matrices corresponding to $X$. For instance, $f(X,Y) = XY$ satisfies $\frac{\partial f}{\partial X}(X_0,Y_0)(X) = XY_0$ since $$ \lim_{Z \to 0} \frac{\|(X_0+Z)Y_0 - X_0Y_0 - \frac{\partial f}{\partial X}(X_0,Y_0)(Z)\|}{\|Z\|} = \lim_{Z \to 0} \frac{\|(X_0+Z)Y_0 - X_0Y_0 - ZY_0\|}{\|Z\|} = 0. $$ (The latter is true under any choice of matrix norms.) Similarly, $\frac{\partial f}{\partial Y}(X_0,Y_0)(Y) = X_0Y$.
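To make these operator formulas concrete, here is a quick numerical sanity check; this is a minimal sketch in Python with NumPy (my own illustration, not part of the original answer, with invented variable names):

import numpy as np

# Check (sketch): the Jacobian of f(X, Y) = X @ Y at (X0, Y0) applied to a
# direction (Z, W) is Z @ Y0 + X0 @ W, and the remainder of the first-order
# expansion is exactly the quadratic (Hessian) term eps^2 * (Z @ W).
rng = np.random.default_rng(0)
n, p, m = 3, 4, 2
X0 = rng.standard_normal((n, p)); Z = rng.standard_normal((n, p))
Y0 = rng.standard_normal((p, m)); W = rng.standard_normal((p, m))

for eps in (1e-1, 1e-2, 1e-3):
    increment = (X0 + eps * Z) @ (Y0 + eps * W) - X0 @ Y0
    jacobian_part = eps * (Z @ Y0 + X0 @ W)
    remainder = increment - jacobian_part
    # remainder / eps^2 should equal the constant matrix Z @ W
    print(eps, np.linalg.norm(remainder / eps**2 - Z @ W))

The printed norms are zero up to floating-point rounding for every eps, matching the claim that the linear term is $XY_0 + X_0Y$ and that the Hessian term is constant.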
You can still apply the chain rule with this partial derivative, but you need to be careful: when you had a composition of functions before, you multiplied the Jacobian matrices. In this case, you need to compose the linear operators, so this might mean something a bit different in context. For instance, if $X,Y$ are both functions of a real variable $t$, then $$ \frac{\partial X(t)Y(t)}{\partial t}(t_0) = \frac{\partial XY}{\partial X}(X(t_0),Y(t_0)) \left( \frac{\partial X(t)}{\partial t}(t_0) \right) + \frac{\partial XY}{\partial Y}(X(t_0),Y(t_0)) \left( \frac{\partial Y(t)}{\partial t}(t_0) \right) \\ = X'(t_0) Y(t_0) + X(t_0) Y'(t_0). $$ (Note that this is what you would expect!) As an exercise, using this answer, you could prove that if $X_0,Y_0$ are matrices such that $X_0Y_0$ is a well-defined square invertible matrix, then $$ \frac{\partial \det(XY)}{\partial X}(X_0,Y_0)(X) = \mathrm{tr}(\mathrm{adj}(X_0Y_0)(XY_0)) = \det(X_0Y_0) \, \mathrm{tr}( (XY_0)(X_0Y_0)^{-1}). $$ Hope that helps,<|endoftext|> TITLE: Modules over Group algebra / representations example QUESTION [7 upvotes]: I know that we have a bijection between modules over a group algebra and ($G$-linear) representations of a group, but I've got problems understanding this. Is there a non-trivial example where the group, its representations, the group algebra and its modules are all worked out explicitly? Whenever I read something about this, the "rest of the details are left as an exercise". I'm quite dumb, but I really want to understand this. Thank you in advance :) REPLY [6 votes]: Let's get some things straight first. $G$ is a group and $\Bbbk$ a field. A $G$-linear representation is a $\Bbbk$-vector space on which $G$ acts by linear automorphisms. I will denote such an action by a dot, i.e. $G\times V\to V$ is given by $(g,v)\mapsto g.v$. The group algebra of $G$ is the $\Bbbk$-vector space $\Bbbk[G]$ with basis $G$, where a multiplication is defined via $$ \left(\sum_{g\in G} a_g g\right) \times \left(\sum_{g\in G} b_g g\right) := \sum_{g\in G} \left(\sum_{h\in G} a_{h}b_{(\smash{h^{-1}g})} \right) g$$ In particular, $g\times h=gh$. A $\Bbbk[G]$-module $V$ has a scalar multiplication by elements of $\Bbbk[G]$; I will denote this by a centered dot. Example 1. The simplest group is $G=\{1\}$: every $\Bbbk$-vector space is a linear $G$-representation, the group algebra is equal to $\Bbbk$ itself, and this makes perfect sense, but is also quite boring. Example 2. The second simplest group is $G=\Bbb Z/2\Bbb Z\cong\{1,\tau\}$ where $\tau^2=1$. Here, a linear $G$-representation is a vector space $V$ on which $\tau$ acts as an involution, i.e. $\tau$ corresponds to an invertible linear map $T\colon V\to V$ with $T\circ T =\operatorname{id}_V$. Indeed, I define the map $T$ as $T(v):=\tau.v$. Hence, we can understand the linear $G$-representations as tuples $(V,T)$ where $V$ is a $\Bbbk$-vector space and $T$ is a linear involution on $V$. On the other hand, $R:=\Bbbk[G]\cong\Bbbk[x]/\langle x^2-1\rangle$ (a polynomial ring quotient), where $\tau$ corresponds to the image of $x$. Indeed, consider the (surjective) $\Bbbk$-algebra homomorphism $\phi:\Bbbk[x]\to R$ which maps $x\mapsto \tau$. Since $\phi(x^2-1)=\phi(x)^2-1=\tau^2-1=0$, we have $x^2-1\in\ker(\phi)$. On the other hand, $\Bbbk[x]/\ker(\phi)\cong\Bbbk[G]=\Bbbk\oplus\Bbbk\tau$ has dimension $2$ as a vector space. Hence, it is the quotient of $\Bbbk[x]$ by a polynomial of degree $2$, and it follows that $\ker(\phi)=\langle x^2-1\rangle$. Now, what are $R$-modules?
Any $R$-module $V$ must be a $\Bbbk$-vector space because we can just restrict scalar multiplication from $R\supseteq\Bbbk$. Then, the only thing that remains to be said is what the scalar multiplication $\tau\cdot v$ means, because once this is defined, we have $(a+b\tau)\cdot v= a\cdot v + b\cdot(\tau\cdot v)$ and scalar multiplication by $a,b\in\Bbbk$ is already defined. Furthermore, we know that the map $T\colon V\to V$ given by $T(v):=\tau\cdot v$ must be an involution, because $$T(T(v))=\tau\cdot\tau\cdot v = \tau^2 \cdot v = 1\cdot v = v.$$ Hence, an $R$-module $V$ is also just a vector space with a fixed linear involution $T:V\to V$. Example 3. You can work out Example 2 for any cyclic group $G=\langle \tau\rangle$ of order $n$: you will get tuples $(V,T)$ where $V$ is a $\Bbbk$-vector space and $T$ is an invertible endomorphism with $T^n=\operatorname{id}_V$. The group algebra is $\Bbbk[G]=\Bbbk[x]/\langle x^n-1\rangle$. After these examples, it quickly becomes complicated and also quite interesting. It is not an easy task to understand all representations of most "interesting" groups. However, the correspondence between modules over the group algebra and representations of the group is always the same formal correspondence: Every module over $\Bbbk[G]$ is in particular a $\Bbbk$-vector space, because $\Bbbk\subseteq\Bbbk[G]$. Then, every group element $g\in G$ corresponds to some linear automorphism of $V$ and these automorphisms have to satisfy the relations of the group. In other words, it all boils down to a group homomorphism $\rho:G\to\operatorname{GL}(V)$. From such a group homomorphism, you can get a $G$-action: \begin{align*} G\times V &\longrightarrow V\\ (g,v) &\longmapsto \rho(g)(v) \end{align*} and also a $\Bbbk[G]$-module structure: \begin{align*} \left(\sum_{g\in G} a_g g\right) \cdot v &= \sum_{g\in G} a_g\cdot \rho(g)(v). \end{align*} The whole thing works vice versa in both cases: Given a $G$-action on $V$, just define $\rho(g)$ to be the map $\rho(g)(v)=g.v$, and when $V$ is a $\Bbbk[G]$-module, just define $\rho(g)$ to be the map $\rho(g)(v)=g\cdot v$. Alright. Hope this helps. REPLY [5 votes]: Let me explain what that famous bijection really is. It comes from a common adjunction of functors! Usually one defines a linear representation as a group homomorphism $$\rho\colon G\to \operatorname{GL} (V).$$ Note that $\operatorname{GL} (V)$ is (by definition) the group of units (invertible elements) in the $k$-algebra $\operatorname{End} (V)$ of $k$-linear endomorphisms $V\to V$. Now comes a little bit of category theory: forming the group algebra $k [G]$ of a group $G$ is a functor $$k [-]\colon \mathit{groups} \to k\mathit{-algebras},$$ and taking the group of units is a functor $$(-)^\times\colon k\mathit{-algebras} \to \mathit{groups}.$$ It is easy to check (!) that these two functors are adjoint, in the sense that there is a natural bijection between morphisms $$\operatorname{Mor}_{k\mathit{-algebras}} (k[G], A) \cong \operatorname{Mor}_{\mathit{groups}} (G, A^\times)$$ (hint: all elements of $G$ are invertible in $k[G]$). If we specialize this to $A = \operatorname{End} (V)$, we get $$\operatorname{Mor}_{k\mathit{-algebras}} (k[G], \operatorname{End} (V)) \cong \operatorname{Mor}_{\mathit{groups}} (G, \operatorname{GL} (V)).$$ Now on the right hand side we immediately recognize representations of $G$, and on the left hand side we have $k[G]$-modules $V$.
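Since Example 2 above is so concrete, here is a small computational illustration of the bijection; a minimal sketch in Python with NumPy (my own, not from either answer), encoding an element $a + b\tau$ of $k[\mathbb Z/2\mathbb Z]$ as the pair (a, b):

import numpy as np

# Multiplication in k[Z/2]: (a + b*tau)(c + d*tau) = (ac + bd) + (ad + bc)*tau,
# using tau^2 = 1.
def group_algebra_mult(u, v):
    (a, b), (c, d) = u, v
    return (a * c + b * d, a * d + b * c)

# A k[Z/2]-module is a vector space V with a linear involution T
# (here T swaps coordinates, so T @ T is the identity).
T = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(T @ T, np.eye(2))

def act(u, v):
    a, b = u
    return a * v + b * (T @ v)   # (a + b*tau) . v := a*v + b*T(v)

# The module axiom (u*w) . v == u . (w . v) holds, i.e. fixing the
# involution T really determines a k[Z/2]-module structure on V.
u, w = (2.0, -1.0), (0.5, 3.0)
v = np.array([1.0, -2.0])
assert np.allclose(act(group_algebra_mult(u, w), v), act(u, act(w, v)))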
Maybe one specific instructive example is the following: $k[G]$ is naturally a module over itself (left or right). This corresponds to what we call the (left or right) regular representation of $G$. Other examples? Well, start from any representation of $G$, and then it gives you some $k[G]$-module. But since in this correspondence you just automatically extend the action of $G$ on $V$ to the action of $k[G]$, I think such examples are not very interesting for understanding the bijection we are talking about.<|endoftext|> TITLE: Putnam Challenge Question QUESTION [5 upvotes]: Determine all real numbers $a$, where $a>0$, for which there exists a nonnegative continuous function $f(x)$, defined on $[0,a]$, with the property that the region $$R=\big\{(x,y)\,\big|\, 0\le x\le a\text{ and } 0\le y\le f(x)\big\}$$ has perimeter $k$ units and area $k$ units for some real number $k$. I apologize for not yet knowing how to type equations as they are seen in most questions on this forum. I joined about 15 minutes ago. REPLY [3 votes]: We first claim that $a>2$ must be satisfied for such a function $f$ to exist. Suppose that $f:[0,a]\to\mathbb{R}_{\geq 0}$ satisfies the condition that the perimeter $\sigma$ of $R=\big\{(x,y)\in [0,a]\times \mathbb{R}_{\geq 0}\,\big|\,y\leq f(x)\big\}$ equals its area $$\alpha:=\displaystyle \int_0^a\,f(x)\,\text{d}x\,.$$ Write $m$ for the maximum value of $f$ on the interval $[0,a]$ (which exists as $f$ is continuous and $[0,a]$ is a compact space). Let $c\in[0,a]$ be such that $f(c)=m$. Then, $$\sigma \geq a+f(0)+f(a)+\sqrt{c^2+\big(m-f(0)\big)^2}+\sqrt{(a-c)^2+\big(m-f(a)\big)^2}\,.$$ Using the Triangle Inequality, we get that $$\begin{align}\sigma &\geq a+f(0)+f(a) +\sqrt{\big(c+(a-c)\big)^2+\Big(\big(m-f(0)\big)+\big(m-f(a)\big)\Big)^2}\\ &\geq a+f(0)+f(a)+\sqrt{a^2+\big(2m-f(0)-f(a)\big)^2}\\ &>a+f(0)+f(a)+\big(2m-f(0)-f(a)\big)=a+2m>2m\,. \end{align}$$ On the other hand, we see that $$\alpha \leq \int_0^a\,m\,\text{d}x=am\,.$$ Because $\alpha=\sigma$, we conclude that $$am\geq \alpha = \sigma>2m\text{ or }a>2\,,$$ since $m$ is clearly a positive real number. To finish the proof, we shall now verify that, when $a>2$, such a function $f$ exists. As discovered in the comment section, the constant function $f$ defined via $$f(x)=\frac{2a}{a-2}\text{ for all }x\in[0,a]$$ has the required property (with $k=\dfrac{2a^2}{a-2}$).<|endoftext|> TITLE: Suppose $\lim_{x\rightarrow\infty} \frac{f(x)}{g(x)}=1$, is $\lim f-g=0$? QUESTION [5 upvotes]: I was wondering if it is possible that $\lim_\limits{x\rightarrow\infty} \frac{f(x)}{g(x)}=1$, but $\lim_\limits{x\rightarrow\infty} ({f(x)}-{g(x)})\neq 0$, or that the latter limit even fails to exist. It seems intuitive that the result will always be zero, and indeed it is easy to prove when both the limits of $f$ and $g$ exist; however, I can't prove it in the case where the limits do not exist. So I am not sure whether it is possible that the result is not zero, or that the limit is nonexistent. Thanks in advance. REPLY [3 votes]: If $f(x)/g(x) = 1$ then $f(x) = g(x)$ and $f(x) - g(x) = 0$, and many many students so desperately want this logical inference to be valid in limit operations also. Such a desire stems from the resistance to ditching the algebraical approach of $+,-,\times,/,=$ when one starts studying calculus. When we deal with limits it is essential to understand that they are a non-algebraic concept based on the order relations $<, >$, and one should expect that they behave differently.
Thus one should not be surprised if $$\lim_{x \to \infty}\frac{f(x)}{g(x)} = 1$$ does not imply that $$\lim_{x \to \infty}\{f(x) - g(x)\} = 0$$ We can see that $$\lim_{x \to \infty}\{f(x) - g(x)\} = \lim_{x \to \infty}g(x)\left(\frac{f(x)}{g(x)} - 1\right) = \lim_{x \to \infty}g(x) h(x)$$ where $h(x) = f(x)/g(x) - 1$. Now the limit of $h(x)$ is $0$, but it does not necessarily follow that the limit of $g(x)h(x)$ is also $0$. One can't make an inference like this without knowing more about $g(x)$.<|endoftext|> TITLE: Lemma on switching between mod $p$ and mod $p^2$ or mod $p^3$ QUESTION [7 upvotes]: Can someone help me prove the following lemma? Also, can it be strengthened? Let $p\geq 5$ be a prime number. Prove that if $p|a^2+ab+b^2$, then $p^3|(a+b)^p-a^p-b^p$ Here is what I tried: We want to show $$p^3|a^{p-1}\binom{p}{1}b+a^{p-2}\binom{p}{2}b^2+\dots+a\binom{p}{p-1}b^{p-1}$$ It is easy to see that all the terms in the expression are divisible by $p$, so we want to show: $$\begin{align*} a^{p-1}b\frac{(p-1)!}{(p-1)!}+a^{p-2}b^2\frac{(p-1)!}{2!(p-2)!}+a^{p-3}b^3\frac{(p-1)!}{3!(p-3)!}+\dots &\equiv a^{p-1}b-a^{p-2}b^2\frac{1}{2!(p-2)!}\dots \\ &\equiv 0\pmod{p} \end{align*}$$ from Wilson's Theorem. But I do not know how to do that, as the expression is quite ugly. Also, we ultimately want to show it is divisible by $p^3$ and not just $p$. Finally, I could not find a way to use the given condition. Any ideas are appreciated. I found the lemma in a solution to the problem here: http://artofproblemsolving.com/community/c6h514444p2890151 REPLY [5 votes]: Here is the proof anticipated by Stefan4024, based on this question linked to in his answer. We first show that if $p \equiv 1 \pmod 6$, then $$ p(a^2 + ab + b^2)^2 \,\mid\, (a+b)^p - a^p - b^p . $$ Consider some fixed $b$, and let $f(x)$ be the polynomial $$ f(x) = (x + b)^p - x^p - b^p. $$ Let $\omega = e^{2\pi i / 3}$ be a primitive third root of unity, and note that since $1 + \omega + \omega^2 = 0$, we have that $1 + \omega = -\omega^2$ is a sixth root of unity. We will show that $\omega b$ is a root of both $f(x)$ and its derivative $f^\prime (x)$. We have that $f(\omega b)$ is equal to $$ (\omega b + b)^p - (\omega b)^p - b^p = b^p \left( (1 + \omega)^p - \omega^p - 1 \right) $$ Since $\omega^3 = (1 + \omega)^6 = 1$, and $p-1$ is divisible by $6$, we have that $\omega^p = \omega$, and $(1 + \omega)^p = (1 + \omega)$. Thus we have that $$ f(\omega b) = b^p \left( (1 + \omega) - \omega - 1 \right) = 0. $$ Thus $\omega b$ is a root of $f$. Similarly, $$ f^\prime (\omega b) = p (\omega b + b)^{p-1} - p(\omega b)^{p-1} = p b^{p-1} \left( (1 + \omega)^{p-1} - \omega^{p-1} \right) = p b^{p-1} (1 - 1) = 0. $$ Thus $\omega b$ is a root of both $f$ and $f^\prime$, from which it follows that $(x - \omega b)^2$ is a factor of $f$. Since $f$ has real coefficients, $(x - \bar{\omega}b)^2$ is also a factor of $f$, and we see that $(x^2 + xb + b^2)^2$ is a factor of $f$. Now it is a well-known fact that for $1 \leq k \leq (p-1)$, the binomial coefficient $\binom{p}{k}$ is divisible by $p$, and so we see by the binomial theorem that all of the coefficients of $f$ are divisible by $p$. Thus $\frac{1}{p} f$ is a polynomial with integer coefficients, and is divisible by $(x^2 + xb + b^2)^2$, which is a monic polynomial with integer coefficients. It follows that we can write $$ \frac{1}{p} f(x) = (x^2 + xb + b^2)^2 \cdot g(x) $$ where $g(x)$ is some polynomial with integer coefficients.
From this it follows easily that $f(a) = (a + b)^p - a^p - b^p$ is divisible by $p(a^2 + ab + b^2)^2$ if $p \equiv 1 \pmod 6$. Now suppose that $p \,\mid\, a^2 + ab + b^2$. We will show that $p \equiv 1 \pmod 6$, so that $$ p^3 \,\mid\, p(a^2 + ab + b^2)^2 \,\mid\, (a + b)^p - a^p - b^p . $$ We note that $$ p \,\mid\, 4a^2 + 4ab + 4b^2 = (2a + b)^2 + 3b^2. $$ If $p \,\mid\, b$, then we see that we must also have $p \,\mid\, a$, and so $(a + b)^p - a^p - b^p$ is divisible by $p^p$, and so is certainly divisible by $p^3$. Suppose now that $b$ is not divisible by $p$. Then we have that $$ \left((2a + b) \cdot b^{-1} \right)^2 \equiv -3 \pmod p $$ where $b^{-1}$ is the multiplicative inverse of $b$ modulo $p$, and so we see that $-3$ is a quadratic residue modulo $p$. Thus $$ \left( \frac{-1}{p} \right)\left( \frac{3}{p} \right) = \left( \frac{-3}{p} \right) = 1 $$ where $\left( \frac{\;}{} \right)$ is the Jacobi symbol. If $p \equiv 1 \pmod 4$, then $$ \left( \frac{-1}{p} \right) = 1 $$ and by quadratic reciprocity, $$ \left( \frac{3}{p} \right) = \left( \frac{p}{3} \right) = \begin{cases} 1 & \text{ if } p \equiv 1 \pmod 3 \\ -1 & \text{ if } p \equiv 2 \pmod 3 \end{cases}. $$ We see that in this case, we must have that $p \equiv 1 \pmod 3$. On the other hand, if $p \equiv 3 \pmod 4$, then we know that $$ \left( \frac{-1}{p} \right) = -1 $$ and so we must have $$ \left( \frac{3}{p} \right) = -1. $$ Since $3$ and $p$ are both $3$ mod $4$, quadratic reciprocity in this case gives us that $$ -1 = \left( \frac{3}{p} \right) = -\left( \frac{p}{3} \right), $$ and so we again have that $p \equiv 1 \pmod 3$. In either case, we see that $p \equiv 1 \pmod 3$, and so $p \equiv 1 \pmod 6$, and the result follows.<|endoftext|> TITLE: Does this operation exist? What's its name? QUESTION [5 upvotes]: I need to do something like this $$ \begin{bmatrix} A \\ B \\ C \\ \end{bmatrix} \begin{bmatrix} x_1 & x_2 & \cdots & x_n \\ y_1 & y_2 & \cdots & y_n \\ z_1 & z_2 & \cdots & z_n \\ \end{bmatrix} = \begin{bmatrix} A x_1 & A x_2 & \cdots & A x_n \\ B y_1 & B y_2 & \cdots & B y_n \\ C z_1 & C z_2 & \cdots & C z_n \\ \end{bmatrix} $$ You get the idea. I want to know if this operation already has a name, in order to see if my linear algebra library already supports it. REPLY [10 votes]: Note that your RHS can be obtained by matrix multiplication: $$ \begin{bmatrix}A&0&0\\0&B&0\\0&0&C\end{bmatrix} \begin{bmatrix} x_1&x_2&\cdots&x_n\\ y_1&y_2&\cdots&y_n\\ z_1&z_2&\cdots&z_n \end{bmatrix}. $$ The left matrix is a block matrix (in fact a block diagonal matrix). Your linear algebra library will likely have a way to construct these from an array of smaller matrices, and then you can use matrix multiplication. REPLY [3 votes]: For the sake of another answer, you might want to take a look at the Hadamard product of matrices. Basically you are doing the Hadamard product of $[A,B,C]^T$ with each column of the second matrix. [Added:] If you know MATLAB, you might want to take a look at element-wise multiplication.<|endoftext|> TITLE: Is there any elegant way to formally prove that the ring $\mathbb Z/(8)$ cannot be decomposed to a product of rings? QUESTION [5 upvotes]: So far, my idea is that: As an additive group, $\mathbb Z/(8)$ is the cyclic group $\mathbb Z_8$. So I need to find two rings whose product is isomorphic to $\mathbb Z_8$. The order of the product ring must be 8. We know that $|H\times K| = {|H| \; |K|}$.
As the factors of 8 are 1, 2, 4, 8, and we cannot involve the trivial ring, we need to try groups of order 2 and order 4. There is only one group structure of order 2: $\mathbb Z_2$. But there are two group structures of order 4: $\mathbb Z_4$ and the Klein four group. Neither choice of factors gives us a cyclic group of order $8$. Hence $\mathbb Z_8$ cannot be decomposed into the product of two rings. But I find this method tedious... Could someone give a simpler and nicer way to prove it? Thanks so much! REPLY [2 votes]: My favorite way to do this is to note that ring decompositions of a ring correspond to central idempotents. Nontrivial decompositions correspond to nontrivial idempotents. In this case, six trivial checks suffice to prove it can't be decomposed: $$ 2^2\equiv 4\\ 3^2\equiv 1\\ 4^2\equiv 0\\ 5^2\equiv 1\\ 6^2\equiv 4\\ 7^2\equiv 1\\ $$ Or, if that's too ugly, think about what it would mean to be idempotent: if $8$ divides $x-x^2 = x(1-x)$, then $2$ divides one of $x$ or $1-x$. Of course it can't divide both, so once you decide which one is divisible by $2$, it is also divisible by $8$ (and hence is zero in $\mathbb Z/(8)$.) At that point you know one of $x$ and $1-x$ is zero and the other is $1$. Another reason that this ring can't be decomposed is that it's a local ring, that is, it has a unique maximal ideal. Any ring with a nilpotent maximal ideal is local, and that's the case here. However the check above works even for rings that aren't local, like integral domains. (And it is not always necessary to manually check every element as I did above.)<|endoftext|> TITLE: An identity about Vandermonde determinant. QUESTION [7 upvotes]: I want to prove the following identity $\sum_{i=1}^{k}x_i\Delta(x_1,\ldots,x_i+t,\ldots,x_k)=\left(x_1+x_2+\cdots+x_k+{k \choose 2}t\right)\cdot\Delta(x_1,x_2,\ldots,x_k),$ where we write $\Delta(l_1,\ldots,l_k)$ for $\prod_{i<j}(l_i-l_j)$.<|endoftext|> TITLE: Find the limit of the following expression: QUESTION [11 upvotes]: $$\lim_{n\to\infty}\frac {1-\frac {1}{2} + \frac {1}{3} -\frac {1}{4}+ ... + \frac {1}{2n-1}-\frac{1}{2n}}{\frac {1}{n+1} + \frac {1}{n+2} + \frac {1}{n+3} + ... + \frac {1}{2n}}$$ I can express the value of the geometric sum of ${\frac {1}{2} + \frac {1}{4}+...+\frac {1}{2n}}$ but the others are ahead of me. Putting both fraction parts under a common denominator makes that part tidy, but the numerator seems to get way too complicated, which makes me think there is some simple way to do this. REPLY [27 votes]: Here's a solution with no calculus, just algebra. The numerator and the denominator are actually equal for all $n.$ Let $H_n=\sum_{k=1}^n \frac1{k}$ be the $n^{\text{th}}$ harmonic number.
Your denominator is $$D_n=\big(1+\frac12+\frac13+\dots+\frac1{2n}\big)-\big(1+\frac12+\frac13+\dots+\frac1{n}\big)=H_{2n}-H_n.$$ Your numerator is $$N_n=1-\frac12+\frac13-\frac14+\dots+\frac1{2n-1}-\frac1{2n}=\sum_{k=1}^{2n} \frac{(-1)^{k+1}}{k}.$$ Compute \begin{align}H_{2n}-N_n &=\big(1+\frac12+\frac13+\dots+\frac1{2n}\big)-\big(1-\frac12+\frac13-\frac14+\dots+\frac1{2n-1}-\frac1{2n}\big) \\ &= 2\cdot\!\!\!\!\!\sum_{\substack{1\le k \le 2n\\k\text{ is even}}}\frac1{k} \scriptsize\quad\quad{\text{ (because the odd terms cancel out and the even terms are doubled up)}} \\&=2\cdot\sum_{j=1}^n\frac1{2j} \\&=\sum_{j=1}^n \frac1{j} \\&=H_{n}, \end{align} so $$ N_n=H_{2n}-H_n.$$ It follows that $N_n=D_n$ for all $n,$ so $N_n/D_n$ is a constant sequence with value always $1,$ and the limit is therefore $1.$<|endoftext|> TITLE: About the first positive root of $\sum_{k=1}^n\tan(kx)=0$ QUESTION [14 upvotes]: I am looking for the first positive solution $x_n$ of the equation $$f_n(x)=\sum_{k=1}^n\tan(kx)=0 \qquad \qquad (n\geq 2)$$ It is simple to show that $$\frac{\pi}{2n} < x_n < \frac{\pi}{2(n-1)}.$$ REPLY: Let $P$ denote the set of poles of $f_n$. On $\mathbb{R}\setminus P$ we have $f_n'(x) > 0$, so $f_n$ maps each interval between consecutive poles diffeomorphically onto $\mathbb{R}$. The case $n = 1$ is trivial ($f_1 = \tan$), and for $n \geqslant 2$ the two smallest positive poles are at $\frac{\pi}{2n}$ and $\frac{\pi}{2(n-1)}$. Since $f_n(0) = 0$ and $f_n' > 0$ on $\mathbb{R}\setminus P$, we have $f_n(x) > 0$ for $0 < x < \frac{\pi}{2n}$, so it follows that $x_n \in \bigl(\frac{\pi}{2n},\frac{\pi}{2(n-1)}\bigr)$, and $f_n$ has, for $n \geqslant 2$, no other zeros in that interval. It is easily seen that $x_2 = \frac{\pi}{3}$, so in the following we assume $n \geqslant 3$. Since \begin{align} f_n \biggl(\frac{\pi}{2n-1}\biggr) &= f_{n-2}\biggl(\frac{\pi}{2n-1}\biggr) + \tan \frac{\pi(n-1)}{2n-1} + \tan \frac{\pi n}{2n-1}\\ &= f_{n-2}\biggl(\frac{\pi}{2n-1}\biggr) + \tan \biggl(\frac{\pi}{2} - \frac{\pi}{4n-2}\biggr) + \tan \biggl(\frac{\pi}{2} + \frac{\pi}{4n-2}\biggr)\\ &= f_{n-2}\biggl(\frac{\pi}{2n-1}\biggr)\\ &> 0, \end{align} it follows that $x_n \in \bigl(\frac{\pi}{2n}, \frac{\pi}{2n-1}\bigr)$. Write $x_n = \frac{1}{n}\bigl(\frac{\pi}{2} + \delta_n\bigr)$. Then $0 < \delta_n < \frac{\pi}{4n-2}$, in particular $\delta_n < \frac{3}{5} x_n$. We use $\tan \bigl(\frac{\pi}{2} - x\bigr) = \cot x$, and $$\frac{1}{x} - \frac{x}{2} < \cot x < \frac{1}{x} - \frac{x}{3}\tag{1}$$ for $0 < x < \frac{\pi}{2}$ in our calculations. First we have $$-\tan (n x_n) = -\tan \biggl(\frac{\pi}{2} + \delta_n\biggr) = \cot \delta_n = \frac{1}{\delta_n} + O(\delta_n).\tag{2}$$ To determine the asymptotic behaviour of $\delta_n$, we next note that (since $f_n(x_n) = 0$) \begin{align} -\tan (nx_n) &= \sum_{k = 1}^{n-1} \tan (k x_n)\\ &= \sum_{m = 1}^{n-1} \tan \bigl((n-m)x_n\bigr)\\ &= \sum_{m = 1}^{n-1} \tan \biggl(\frac{\pi}{2} - (mx_n - \delta_n)\biggr)\\ &= \sum_{m = 1}^{n-1} \cot (mx_n - \delta_n).
\end{align} The inequalities $(1)$ now yield $$\sum_{m = 1}^{n-1} \frac{1}{m x_n - \delta_n} - \frac{1}{2} \sum_{m = 1}^{n-1} (m x_n - \delta_n) < -\tan (n x_n) < \sum_{m = 1}^{n-1} \frac{1}{m x_n - \delta_n} - \frac{1}{3} \sum_{m = 1}^{n-1} (m x_n - \delta_n).\tag{3}$$ We find $$\sum_{m = 1}^{n-1} (m x_n - \delta_n) = \frac{n(n-1)}{2}x_n - (n-1)\delta_n = (n-1)\frac{\pi - 2\delta_n}{4}$$ and \begin{align} \sum_{m = 1}^{n-1} \frac{1}{m x_n -\delta_n} &= \sum_{m = 1}^{n-1} \frac{1}{m x_n} + \frac{\delta_n}{x_n^2} \sum_{m = 1}^{n-1} \frac{1}{m\bigl(m - \frac{\delta_n}{x_n}\bigr)}\\ &= \frac{\log n + \gamma + O(n^{-1})}{x_n} + O(\delta_n x_n^{-2}). \end{align} With $x_n^{-1} \sim \frac{2}{\pi} n$ it follows that $$\frac{1}{\delta_n} \sim - \tan (n x_n) = \frac{2}{\pi}n \log n + O(n),$$ or $$\delta_n = \frac{\pi}{2n\log n}\bigl( 1 + O\bigl((\log n)^{-1}\bigr)\bigr).\tag{4}$$ With some tedious work, we can get some bounds on the $O\bigl((\log n)^{-1}\bigr)$ term in $(4)$, but since $\frac{\pi}{12} < \frac{2\gamma}{\pi} < \frac{\pi}{8}$, what we have isn't sufficient to even determine whether it is $\Theta\bigl((\log n)^{-1}\bigr)$. However, $(4)$ suffices to show that $$\begin{split}x_n &= \frac{1}{n}\biggl(\frac{\pi}{2} + \delta_n\biggr) = \frac{\pi}{2n}\biggl(1 + \frac{1 + O\bigl((\log n)^{-1}\bigr)}{n\log n}\biggr)\\ &= \frac{\pi}{2n\bigl(1 - \frac{1 + O((\log n)^{-1})}{n\log n}\bigr)} = \frac{\pi}{2\bigl(n - (\log n)^{-1} + O((\log n)^{-2})\bigr)}.\end{split}\tag{5}$$ Concerning the difference between the empirical best-fit constants and the exact asymptotic values, recall Legendre's constant. The logarithm is a very slowly varying function; to eliminate effects of constants from empirical estimates, one may need very large numbers. However, it may be that $250000$ is large enough, and the difference between the best-fit and the exact values is caused by the larger deviation from the asymptotic behaviour for the smaller $n$. Try a best-fit for e.g. $200000 \leqslant n \leqslant 250000$ to see what that gives.<|endoftext|> TITLE: The Proximal Operator of the $ {L}_{1} $ Norm Function QUESTION [7 upvotes]: Write down explicitly the optimal solutions to the Moreau-Yosida regularization of the function $f(x)=\lambda\|x\|_1$, where $f:\mathbb{R}^n\to(-\infty,+\infty]$. I have found that the answer is $y_i=\operatorname{sgn}(x_i)\max\{|x_i|-\lambda,0\}$ Here is my attempt to get the answer: The proximal operator of $f(x)$ is $\min_{y\in\mathbb{R}^n}\lambda\|y\|_1+\frac{1}{2}\|y-x\|^2_2$. I need to minimize this over $y$. I have no idea how to continue. I have read a lot of references but I cannot find an explicit step-by-step solution to this problem. Any help would be appreciated! REPLY [12 votes]: The Optimization Problem given by the Prox Operator is: $$ \operatorname{Prox}_{\lambda {\left\| \cdot \right\|}_{1}} \left( x \right) = \arg \min_{u} \left\{ \frac{1}{2} {\left\| u - x \right\|}^{2} + \lambda {\left\| u \right\|}_{1} \right\} $$ This problem is separable with respect to both $ u $ and $ x $, hence one can solve the scalar problem: $$ \arg \min_{ {u}_{i} } \left\{ \frac{1}{2} {\left( {u}_{i} - {x}_{i} \right)}^{2} + \lambda \left| {u}_{i} \right| \right\} $$ Now, you can proceed using the First Order Optimality Condition and the Sub Gradient of the $ \operatorname{abs} \left( \cdot \right) $ function, or you can employ a simple trick. The trick is to understand that $ {u}_{i} $ can be either positive, zero or negative.
Assuming $ {u}_{i} > 0 $, the derivative is given by $ {u}_{i} - {x}_{i} + \lambda $, which vanishes for $ {u}_{i} = {x}_{i} - \lambda $; this is consistent with $ {u}_{i} > 0 $ provided $ {x}_{i} > \lambda $. The same procedure for the case $ {u}_{i} < 0 $ yields $ {u}_{i} = {x}_{i} + \lambda $ for $ {x}_{i} < -\lambda $. For values of $ {x}_{i} $ in between, the solution is $ {u}_{i} = 0 $: since the derivative (Sub Gradient) of $ \left| {u}_{i} \right| $ at $ {u}_{i} = 0 $ can be chosen freely in the range $ \left[ -1, 1 \right] $, the first order condition can be satisfied with $ {u}_{i} = 0 $. In summary: $$ \operatorname{Prox}_{\lambda {\left\| \cdot \right\|}_{1}} \left( x \right)_{i} = \operatorname{sign} \left( {x}_{i} \right) \max \left( \left| {x}_{i} \right| - \lambda, 0 \right) $$ As @NicNic8 noted, this operation is called Soft Threshold.<|endoftext|> TITLE: When stating a theorem in textbook, use the word "For all" or "Let"? QUESTION [24 upvotes]: (Some report that my question is similar to another post. However, that post is talking about writing the "proof", rather than "stating" the theorem. "Proving" a theorem does NOT have the same structure and situation as "stating" a theorem. So this question is not a duplicate of the other! Do not let it be closed! And by the way, I'm also the OP of that question...) In writing a textbook, when we need to state a theorem that is a universal quantification, we can use the word "for all ..." (or equivalently "for every", "for any", "for arbitrary", "for each") or "let ...". Which of these ways is more ideal? Why? Although I think writing "for all" is the more natural way to reflect the logical structure, that is, a universal quantifier $\forall$, the popular style I have seen tends to use "let". Any theoretical aspect or experience is welcome. Example set 1. For all natural numbers $n$, if $n$ is even, then $n$ squared is even. Let $n$ be a natural number. If $n$ is even, then $n$ squared is even. Example set 2. Let $A,B$ be two sets. If for all $x\in A$, $x\in B$, then we say $A$ is a subset of $B$. For all pairs $A,B$ of sets, if for all $x\in A$, $x\in B$, then we say $A$ is a subset of $B$. Example set 3. Let $Y$ be a subspace of $X$. Then $Y$ is compact if and only if every covering of $Y$ by sets open in $X$ contains a finite subcollection covering $Y$. (Munkres Topology Lemma 26.1) For all subspaces $Y$ of $X$, $Y$ is compact if and only if every covering of $Y$ by sets open in $X$ contains a finite subcollection covering $Y$. Example set 4. For every $f:X\to Y$ being a bijective continuous function, if $X$ is compact and $Y$ is Hausdorff, then $f$ is a homeomorphism. (adapted by me, maybe ill-grammared?) For every bijective continuous function $f:X\to Y$, if $X$ is compact and $Y$ is Hausdorff, then $f$ is a homeomorphism. (adapted by me.) Let $f:X\to Y$ be a bijective continuous function. If $X$ is compact and $Y$ is Hausdorff, then $f$ is a homeomorphism. (Munkres Topology Theorem 26.6) Newly added example set 5 (I skipped the quantification on $E,f:E\to\mathbb{R},L,c$; just focus on the key part here.) If "$\forall\varepsilon>0,\exists\delta>0,\forall x\in E,0<|x-c|<\delta\rightarrow |f(x)-L|<\varepsilon$", then we say $f(x)$ converges to $L$ when $x$ approaches $c$. If, for all $\varepsilon>0$, there exists $\delta>0$ such that for all $x\in E$, if $0<|x-c|<\delta$ then $|f(x)-L|<\varepsilon$, then we say $f(x)$ converges to $L$ when $x$ approaches $c$.
(Using "for all") If let $\varepsilon>0$, there is $\delta>0$, such that let $x\in E$, if $0<|x-c|<\delta$ then $|f(x)-L|<\varepsilon$", then we call $f(x)$ converges to $L$ when $x$ approaches $c$. (Using "let". I think this type is not natural. But I can't tell why.) REPLY [2 votes]: I might misunderstand your question, and I'm not a logician, but I can't resist to give an answer here. In my point of view it is more important to be clear and easy to read than to write a logically 100% correct statement. I think that in some of your examples, you should use neither "Let" or "For all", but rather use words, to make the statements easier to digest (I'm well aware, and respect that others think different). Suggestions: Example 1 The square of an even number is even. If it is not clear enough that this holds for all even numbers, then maybe: The square of every even number is again even. Example 2 We say that $A$ is a subset of $B$ if every element of $A$ also belongs to $B$. Example 4 Every bijective continuous mapping from a compact space to a Hausdorff space is a homeomorphism.<|endoftext|> TITLE: Why $\{x: x = x\}$ is not a set in Naive Set Theory? (Halmos, Sec. 4) QUESTION [8 upvotes]: I am reading Naive Set Theory by Paul Halmos. On Section 3 (Unordered Pairs), page 11, it is written that: As further examples, we note that $$\{x:x\neq x\} = \varnothing$$ and $$\{x:x= a\} = \{a\}.$$ In case $S(x)$ is $(x \in' x)$, or in case $S(x)$ is $(x=x)$, the specified $x$'s do not constitute a set. I understood that the problem with the existence of a set specified as $\{x:x \notin x\}$ is that if such a set $A$ exists, the statement $A \in A$ can be proven to be both true and false sentence. Still I can't understand why can't we specify a set of elements specified by a sentence $\{x: x = x\}$. PS. This is a cite of a SE Math question Notation on Set Theory. I quoted the original question and explicitly stated that part of it which was obvious to the original author, but I can't understand his explanation: The last sentence is not clear to me. Is it because, if $S(x)$ is $(x=x)$, this denotes the set with whichever $x$ I can imagine? that also seems to be stated as a suggestion and not as a final answer. REPLY [4 votes]: I agree with you that Halmos's sentence (page 11) : In case $S(x)$ is $(x∈′x)$, or in case $S(x)$ is $(x=x)$, the specified $x$'s do not constitute a set. is not crystal clear. Halmos has introduced the Specification axiom (page 6): for every $A$ and condition$S(x)$ the set $B$ whose elements... witten $B= \{ x \in A : S(x) \}$ exists. Then (same page) he proves, using the construction of Russell's paradox, that there is no "universal" set; i.e. he proves that: for every set $A$, there is a set $B$ such that $B \in' A$, specifically, this set is $B = \{ x \in A : x \in ' x \}$. Comment: here the definition of $B$ is correct, according to Specification. What we have to note is that, up to now (page 6), we do not know if there are sets at all. Specification is a "conditional" set existence axiom: if we have a set $A$, the axiom licences us to build new sets (subsets of $A$) for every "specifiable" condition $S(x)$. Then (page 8) we start populating the "universe" of sets, with the "temporary" axiom: there exists a set. In other axiomatization of the theory, the first "existential axiom" is usually the Empty set axiom. 
In Halmos' approach, the existence of the empty set is proved: let $A_0$ be the set whose existence is asserted by the above axiom; then, by Specification, we have that $\{ x \in A_0 : x \ne x \}$ exists. Clearly, this set is empty, and by Extensionality we have that it is unique (i.e. two empty sets must coincide), and thus we call it the empty set: $\emptyset$. Third axiom (page 9): Pairing. Again, it is "conditional": for any two sets $a, b$ there exists a set that they both belong to; for every $a,b$, the set $A = \{ x : x = a \text { or } x = b \}$ exists; or: $x \in A \text { iff } x = a \text { or } x = b$. Call it $\{ a, b \}$. Comment: again, up to now we know only of $\emptyset$. Thus, applying Pairing to it, what we get is nothing more than $\{ \emptyset \}$, and so on. In general (page 10), Pairing applied to $a$ alone (instead of $a,b$) gives us $\{ a \}$ for every set $a$. And now we have the comments of page 10 (last half) and page 11. We know that $\{ x : S(x) \}$ is not a correct way to create sets. In spite of this, Halmos considers some "good" cases: (i) let $S(x)$ be the formula $x \in A$. In this case, the "bad" $\{ x : S(x) \}$ works, because it amounts to $\{ x : x \in A \} = \{ x \in A : x \in A \}=A$ and this is a legitimate instance of Specification. Comment: of course, it is only a "typographical" usage because we cannot use it to prove anything new. The set $A$ is "already there". (ii) The same with $x \ne x$ as $S(x)$. We have that, for every $A$: $\{ x \in A : x \ne x \} = \emptyset$ exists, by Specification. In this way, we have "specified" the empty subset of every set $A$; but by extensionality they all coincide, and thus we do not care about $A$. Comment: maybe the "standard" approach of postulating directly the existence of $\emptyset$ is better... (iii) Also the "wrong" $\{ x : x = a \}$ does no harm. It is only a shorthand for the set $\{ x : x = a \text { or } x = a \}$ licensed by Pairing. And here we stop... The two further cases of $S(x)$, namely $(x \in ' x)$ and $(x=x)$, cannot be used, because they lead to problems (discussed above). I hope it may help.<|endoftext|> TITLE: Why isn't there a field of mathematics that specifically studies nonlinear systems? QUESTION [5 upvotes]: There is linear algebra, which is partially devoted to studying linear systems, vectors etc., but why isn't there such a developed field which focuses on nonlinear systems? I've managed to find this publication on the topic, but in the abstract it says "relatively new field". What are the greatest difficulties for the development of that stream of mathematics? REPLY [4 votes]: There is one, and it is called "algebraic geometry".<|endoftext|> TITLE: Upper bound on $\displaystyle\sum_{\text{cyc}}\dfrac{a}{a^3+b^2+c}$ QUESTION [6 upvotes]: Let $a,b,c$ be positive reals such that $a+b+c=3$. Determine the largest possible value of $$\dfrac{a}{a^3+b^2+c}+\dfrac{b}{b^3+c^2+a}+\dfrac{c}{c^3+a^2+b}.$$ Experimenting with some values of $(a,b,c)$ I conjectured that the maximum value attained is $1$. But I am unable to prove this upper bound. Any hints or solutions are welcome.
REPLY [3 votes]: Hint: Use Hölder: $(a^3+b^2+c)(1+b+c)(1+1+c) \ge (a+b+c)^3$ to yield $$\frac{a}{a^3+b^2+c} \le \frac{a(1+b+c)(2+c)}{(a+b+c)^3}$$ and reduce the inequality to $5\sum ab + \sum a^2b + 3abc \le 21$, which is easy.<|endoftext|> TITLE: Show that the Gaussian binomial coefficient is a symmetric polynomial QUESTION [6 upvotes]: Deduce that ${{n} \brack {k}}_{q}$ is a symmetric polynomial in $q$, that is, if \begin{equation*} {{n}\brack {k}}_{q} = a_0 + a_1q + a_2q^2 + \ldots + a_Nq^N \end{equation*} with $a_N \neq 0$, then $a_i = a_{N-i}$ for all $i$. I'm having trouble proving this. Is this simply by symmetry of the Gaussian binomial coefficient (${n \brack k} = {n \brack n-k}$)? REPLY [3 votes]: Edit: I found a much more elementary proof of this fact; the old proof I had is below the line. A polynomial $p$ of degree $d$ is symmetric iff $p(x)=x^dp(x^{-1})$. The Gaussian binomial coefficient $\binom{n}{k}_q$ has degree $k(n-k)$, so the following proves it is symmetric: $$ \begin{align} q^{k(n-k)}\binom{n}{k}_{q^{-1}} &=q^{k(n-k)}\frac{(q^{-n}-1)(q^{-n+1}-1)\cdots(q^{-1}-1)}{(q^{-k}-1)\cdots(q^{-1}-1)(q^{-(n-k)}-1)\cdots (q^{-1}-1)}\\ &=\left(\frac{q^{n(n+1)/2}}{q^{k(k+1)/2}\cdot q^{(n-k)(n-k+1)/2}}\right)\cdot\frac{(q^{-n}-1)(q^{-n+1}-1)\cdots(q^{-1}-1)}{(q^{-k}-1)\cdots(q^{-1}-1)(q^{-(n-k)}-1)\cdots (q^{-1}-1)}\\ &=\frac{(1-q^{n})(1-q^{n-1})\cdots(1-q)}{(1-q^k)\cdots(1-q)(1-q^{n-k})\cdots (1-q)}=\binom{n}{k}_q \end{align} $$ You may recall that $\binom{n}{k}$ counts the number of lattice paths from $(0,0)$ to $(n-k,k)$ where every step is either up or right. This is because such a path can be expressed as a string of $n$ letters, where $k$ of them are U for "up" and $n-k$ are R for "right." The coefficients of the polynomial $\binom{n}{k}_q$ also have a meaning related to lattice paths. Specifically: The $q^m$ coefficient of $\binom{n}{k}_q$ represents the number of lattice paths from $(0,0)$ to $(n-k,k)$ where the area under the path is equal to $m$. This is discussed in the wikipedia article. If you can prove the above assertion, the symmetry of $\binom{n}{k}_q$ follows, since rotating a lattice path $180^\circ$ is a bijection from lattice paths with an area of $m$ to lattice paths with an area of $k(n-k)-m$. You can prove this combinatorial interpretation by using the $q$-Pascal identity $$ \binom{n}{k}_q=q^k\binom{n-1}{k}_q+\binom{n-1}{k-1}_q $$ and then showing that the number of lattice paths from $(0,0)$ to $(n-k,k)$ with an area of $m$ obeys a similar recurrence.<|endoftext|> TITLE: Is the minimal conjunctive normal form for positive formula unique? If so, how do you calculate it? QUESTION [9 upvotes]: I am considering positive Boolean formulas (no negations). Take for example $A$. Here are two of its positive conjunctive normal forms. $$A$$ $$A \land (A \lor B)$$ The minimal example is $A$. Does every positive Boolean formula have a unique minimal conjunctive normal form? If so, how does one calculate it? (I conjecture that you can do so by finding a positive conjunctive normal form, and then pruning any terms that are implied by other terms (for example, $A \lor B$ is implied by the previous term $A$, so it gives no additional information in a conjunction). I don't know how to prove that this is correct, if it is so. (It is also not very efficient.)) REPLY [2 votes]: This showed up on the Wikipedia Math Help Desk, see 1. It looks like the minimal expression is unique for positive expressions.
Proof: Let $S$ and $T$ be two equivalent positive expressions in CNF which are both minimal. Let $\{S_i\}$ be the set of clauses in $S$ and $\{T_j\}$ be the set of clauses in $T$. Each $S_i$ and $T_j$, in turn, corresponds to a subset of a set of Boolean variables $\{x_k\}$. Since $S$ is minimal, no $S_i$ is contained in $S_j$ for $j\ne i$, and similarly for $T$. For each assignment $a\colon \{x_k\} \to \{T, F\}$, define $Z(a)$ to be the set of variables for which $a$ is $F$, i.e. $Z(a)$ is the complement of the support of $a$. A clause $S_i$ evaluates to $F$ iff $S_i\subseteq Z(a)$, and the expression $S$ evaluates to $F$ iff $S_i\subseteq Z(a)$ for some $i$. A similar statement holds for $T$. Fix $i$ and define the truth assignment $a_i(x_k)$ to be $T$ when $x_k$ is not in $S_i$; in other words, $a_i$ is the truth assignment so that $Z(a_i) = S_i$. The clause $S_i$ evaluates to $F$ under this assignment, so $S$ evaluates to $F$. But $S$ and $T$ are equivalent, so $T$ evaluates to $F$. Therefore $T_j\subseteq Z(a_i)= S_i$ for some $j$. Similarly, for each $j$ there is $k$ so that $S_k \subseteq T_j$. (I think another way of saying this is that $S$ and $T$ are refinements of each other.) If $S_i$ is an element of $S$, then there is $T_j$ in $T$ so that $T_j \subseteq S_i$, and there is an $S_k$ so that $S_k \subseteq T_j$. Then $S_k \subseteq S_i$ and so, since $S$ is minimal, $i=k$. We then have $S_i \subseteq T_j \subseteq S_i$, so $S_i = T_j \in T$. So $S \subseteq T$ and similarly $T \subseteq S$, therefore $S = T$. Another (probably better) approach is to characterize the clauses that appear.<|endoftext|> TITLE: Does this group action construction have a name? QUESTION [7 upvotes]: Let $G \curvearrowright X$ be a group action. Then $G \curvearrowright X \times X$ through $g \cdot (x, y) = (g \cdot x, g \cdot y)$. I am interested in whether this 'diagonal' action and its orbits have a special name. REPLY [4 votes]: If I remember correctly, your guess "diagonal action" is a common name for this kind of group action.<|endoftext|> TITLE: Formulas for different permutation/combination scenarios QUESTION [7 upvotes]: I was trying to develop formulas for different permutation/combination scenarios, but I could not sort out the last three cases. Please check the following cases - Unique items Repetition: no Permutation: $_nP_r$ Combination: $_nC_r$ Repetition: yes Permutation: $n^r$ Combination: $_{n+r-1}C_r$ or $_{n+r-1}C_{n-1}$ Non-unique items Repetition: no Permutation: $\displaystyle \frac{n!}{k_1!k_2!\cdots k_n!}$, where $k_1,k_2,\dots k_n$ are the multiplicities of the non-unique items Combination: ?? Repetition: yes Permutation: ?? Combination: ?? REPLY [10 votes]: Closely related to your question is a somewhat more general consideration of fundamental counting techniques called the twelvefold way. R.P. Stanley presents the twelvefold way in his classic Enumerative Combinatorics vol. 1, section 1.9. He considers finite sets $N$ and $X$, with $|N|=n, |X|=x$, and counts the number of different functions $f:N\rightarrow X$ under different situations. Functions: $f$ may be arbitrary, injective or surjective, giving three different possibilities. Sets: Elements of $N,X$ may be either distinguishable or indistinguishable, resulting in four different possibilities.
Altogether we can consider $3\cdot 4=12$ different situations: \begin{array}{ll|ccc} \text{elements }N&\text{elements }X&\quad\text{any }f\quad&\quad\text{injective }f\quad&\quad\text{surjective } f\quad\\ \hline \text{dist.}&\text{dist.}&x^n\quad&\quad x^{\underline{n}}\quad&x!{n\brace x}\\ \text{indist.}&\text{dist.}&\left(\!\!{x\choose n}\!\!\right)\quad&\quad\binom{x}{n}\quad&\left(\!\!{x\choose n-x}\!\!\right)\quad\\ \text{dist.}&\text{indist.}&\sum_{j=0}^x{n\brace j}\quad&\quad\begin{matrix}1&\text{if }n\leq x\\0&\text{if }n>x\end{matrix}\quad&{n\brace x}\quad\\ \text{indist.}&\text{indist.}&\sum_{j=0}^xp_j(n)\quad&\quad\begin{matrix}1&\text{if }n\leq x\\0&\text{if }n>x\end{matrix}\quad&p_x(n)\quad\\ \end{array} with $\qquad x^{\underline{n}}=x(x-1)\cdots(x-n+1)$ the falling factorial of $x$, $\qquad x!=x(x-1)\cdots 3\cdot2\cdot1$ the factorial of $x$, $\qquad \binom{x}{n}=\frac{x!}{n!(x-n)!}$ the binomial coefficient $x$ choose $n$, $\qquad \left(\!\!{x\choose n}\!\!\right)=\binom{x+n-1}{n}$ the number of multisets $x$ multichoose $n$, $\qquad {n\brace x}$ the Stirling numbers of the second kind, and $\qquad p_x(n)$ the number of partitions of $n$ into $x$ parts. A presentation in terms of urns and balls can be found here. [2016-10-15] Add-on: Some information regarding properties of functions and sets, added due to a comment of the OP. A function $f:N\rightarrow X$ is said to be arbitrary or non-restrictive if there is no specific restriction given, injective or one-to-one if each element of $X$ is the image of at most one element of $N$, and surjective or onto if each element of $X$ is the image of at least one element of $N$. Examples: Let's take a look at some functions with respect to these properties: \begin{array}{l|ccc} \text{function}&arbitrary&injective&surjective\\ \hline \\ f:\{1,2,3\}\rightarrow\{a,b,c,d\}&\mathbb{\color{blue}{\text{yes}}}&-&-\\ f(1)=f(2)=c,f(3)=a\\ \\ g:\{1,2,3\}\rightarrow\{a,b,c,d\}&\mathbb{\color{blue}{\text{yes}}}&\mathbb{\color{blue}{\text{yes}}}&-\\ g(1)=d,g(2)=c,g(3)=a\\ \\ h:\{1,2,3,4\}\rightarrow\{a,b,c\}&\mathbb{\color{blue}{\text{yes}}}&-&\mathbb{\color{blue}{\text{yes}}}\\ h(1)=h(4)=c,h(2)=a,h(3)=b\\ \\ i:\{1,2,3,4\}\rightarrow\{a,b,c,d\}&\mathbb{\color{blue}{\text{yes}}}&\mathbb{\color{blue}{\text{yes}}}&\mathbb{\color{blue}{\text{yes}}}\\ i(1)=d,i(2)=c,i(3)=a,i(4)=b\\ \end{array} Balls and boxes We think of $N=\{1,2,3\}$ as a set of balls and of $X=\{a,b,c,d\}$ as a set of boxes. A function $f:N\rightarrow X$ is considered as placing each ball into some box. We consider four functions $j,k,l,m: N\rightarrow X$ by \begin{array}{lclcllcl} j(1)&=&j(2)&=&a,&\qquad j(3)&=&b\\ k(1)&=&k(3)&=&a,&\qquad k(2)&=&b\\ l(1)&=&l(2)&=&b,&\qquad l(3)&=&d\\ m(2)&=&m(3)&=&b,&\qquad m(1)&=&c\\ \end{array} The four functions were then illustrated four ways: with distinguishable balls and boxes, with balls indistinguishable, with boxes indistinguishable, and with balls and boxes indistinguishable (the accompanying figures are not reproduced here).<|endoftext|> TITLE: Expectation of gradient in stochastic gradient descent algorithm QUESTION [10 upvotes]: I'm studying the stochastic gradient descent algorithm for optimization. It looks like this: $L(w) = \frac{1}{N} \sum_{n=1}^{N} L_n(w)$ $w^{(t+1)} = w^{(t)} - \gamma \nabla L_n(w^{(t)})$ I assume that $n$ is chosen randomly each time the algorithm iterates (is that right?). The problem comes when my notes state that $E[\nabla L_n(w)] = \nabla L(w)$. Where does this come from?
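A quick simulation also makes the identity plausible (a minimal sketch with made-up one-dimensional quadratic losses $L_n(w)=\tfrac12(w-t_n)^2$; all names and values here are illustrative):

import numpy as np

rng = np.random.default_rng(0)
targets = rng.normal(size=10)        # one data point t_n per loss L_n
w = 1.5                              # gradient of L_n at w is (w - t_n)

full_grad = np.mean(w - targets)     # gradient of L(w) = (1/N) sum_n L_n(w)
samples = [w - targets[rng.integers(10)] for _ in range(100000)]
print(full_grad, np.mean(samples))   # the two numbers agree up to sampling noise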
REPLY [11 votes]: Let's assume we are talking about stochastic gradient descent where we update the weights based on a single example (not a minibatch), out of a total data set of size $N$. The total error over the whole set is given by: $$ L(w) = \frac{1}{N}\sum\limits_{n=1}^{N} L_n(w) $$ Then, at every step, a random sample point $n\sim U$ is chosen, and we update the weights via: $$ w \leftarrow w - \gamma\nabla L_n(w) $$ where $U$ means uniform over the data set. Now we want to know whether $\mathbb{E}_{n\sim U}[\nabla L_n(w)]=\nabla L(w)$. We show this as follows: \begin{align} \mathbb{E}_{n\sim U}[\nabla L_n(w)] &= \nabla\; \mathbb{E}_{n\sim U}[ L_n(w)] \\ &= \nabla \sum\limits_{i=1}^N P(n=i) L_i(w)\\ &= \nabla \frac{1}{N}\sum\limits_{i=1}^{N} L_i(w)\\ &= \nabla L(w) \end{align} The first step is probably the nastiest (although not in the discrete case, I guess), but we can interchange the gradient and expectation assuming $L$ is sufficiently smooth and bounded (which it probably is). See here and here. The other steps are just the definition of discrete expectation (but should still work assuming continuous spaces as well).<|endoftext|> TITLE: Prerequisites for book "mirror symmetry and algebraic geometry" by Cox and Katz QUESTION [6 upvotes]: As the title suggests, I am trying to read the book mentioned, but I find that it uses a lot of material that I don't know yet. For example, it uses toric geometry and polytopes, topics that I've never seen in regular courses at my university. So, I want to know, from the experience of someone who has used the book, what are the prerequisites in algebraic geometry needed to understand the text. Is it necessary to know the language of schemes? Are schemes used at all? My background is very modest, being a course in classic algebraic geometry, at the level of Fulton's book "Algebraic curves" and almost the second chapter of Hartshorne's "Algebraic Geometry". I found the book too diffuse as far as the range of topics used is concerned. If someone can answer my questions and give some suitable references (in terms of what is needed in the book) for the necessary background, I'll be very grateful. The thing is, I barely can see what is being done with the algebraic geometry in the text, and I'm looking for a "scheme-theoretic" point of view of the topic. REPLY [5 votes]: I was reading it a year ago. It's important that you have a little experience with Hodge decomposition, the Gauss-Manin connection and Kähler geometry in general (there are two volumes by Claire Voisin which provide the necessary background; here's a review). Also you need to understand the moduli space of algebraic curves (read, for example, Harris and Morrison's "Moduli of Curves". The virtual fundamental class is a stack-theoretic construction and thus has some relationship to schemes, but I don't think it's important for a first read, so schemes are not important). Fulton has a book on toric varieties; you can also look at Batyrev's original work. Also you need to understand group actions on varieties and the orbifold construction; I'm not really sure about a reference for that. As for the physical side (string theory or QFT), it is not really important, but you can look through the Clay monograph's physics chapters.<|endoftext|> TITLE: A positive integer gets reduced by 9 times when one of its digits is deleted.... QUESTION [8 upvotes]: A positive integer is reduced to one ninth of its value when one of its digits is deleted, and the resulting number is divisible by 9.
Prove that to divide the resulting number by 9, it is again sufficient to delete one of its digits. Find all such numbers. I am completely clueless as to how this question can be solved. I require a hint to start solving this. Note: The only thing I can think of is that the digit deleted the first time is either 0 or 9. According to the divisibility rule, the sum of the digits should be a multiple of 9 if the number is divisible by 9. If the sums of the digits of both the 1st number and the 2nd number are multiples of 9, then the deleted digit is surely 9 or 0. Let the 3 numbers be $a,b,c$. $a=9b$ $b=9c$ Therefore, $81\mid a$ REPLY [4 votes]: Write the first number as $10^{n+1}a+10^nb+c$ where $b$ is the digit that will be deleted, $c$ has $n$ digits, and $a$ can have multiple digits. We are told that $10^{n+1}a+10^nb+c=9(10^na+c)$ with $10^{n-1} \le c \lt 10^n$. This gives $8c=10^n(a+b)$, which shows $a+b \le 7$. The fact that deleting a digit does not spoil the divisibility by $9$ shows that $b=0$, as $b=9$ is prohibited. If we take $a=1,b=0$ we find the number to be $10125$ with as many trailing zeros as desired. Similarly we find the solutions $2025,30375,405,50625,6075,70875$, all of which can be multiplied by $10^k$. You delete the second digit $0$ to do the first division by $9$ and the first digit for the second division by $9$.<|endoftext|> TITLE: Prove $\csc(x)=\sum_{k=-\infty}^{\infty}\frac{(-1)^k}{x+k\pi}$ QUESTION [10 upvotes]: Prove $$\csc(x)=\sum_{k=-\infty}^{\infty}\frac{(-1)^k}{x+k\pi}$$ Hardy uses this fact without proof in a monograph on different ways to evaluate $\int_0^{\infty}\frac{\sin(x)}{x} dx$. REPLY [2 votes]: We begin by expanding the function $\cos(xy)$ in a Fourier series, $$\cos(xy)=a_0/2+\sum_{k=1}^\infty a_k\cos(ky) \tag1$$ for $y\in [-\pi,\pi]$. The Fourier coefficients in $(1)$ are given by $$\begin{align} a_k&=\frac{2}{\pi}\int_0^\pi \cos(xy)\cos(ky)\,dy\\\\ &=\frac1\pi (-1)^k \sin(\pi x)\left(\frac{1}{x +k}+\frac{1}{x -k}\right)\tag2 \end{align}$$ Substituting $(2)$ into $(1)$, setting $y=0$, and dividing by $\sin(\pi x)$ reveals $$\begin{align} \pi \csc(\pi x)&=\frac1x +\sum_{k=1}^\infty (-1)^k\left(\frac{1}{x -k}+\frac{1}{x +k}\right)\\\\ &=\sum_{k=-\infty}^\infty \frac{(-1)^k}{x-k}\\\\ &=\sum_{k=-\infty}^\infty \frac{(-1)^k}{x+k}\tag3 \end{align}$$ Finally, enforcing the substitution $x\to x/\pi$ and dividing by $\pi$ in $(3)$ yields the coveted result $$\csc(x)=\sum_{k=-\infty}^\infty \frac{(-1)^k}{x+k\pi}$$<|endoftext|> TITLE: Homology groups of the Mapping Torus QUESTION [6 upvotes]: Question 2.2.30 of Hatcher: For the mapping torus $T_f$ of a map $f: X \to X$, we constructed in Example 2.48 a long exact sequence $\cdots \rightarrow H_n(X) \xrightarrow{ 1 - f_{\ast} } H_n(X) \longrightarrow H_n(T_f) \longrightarrow H_{n-1}(X) \longrightarrow \cdots.$ Use this to compute the homology of the mapping tori of the following maps: (a) A reflection $S^2 \to S^2$. So obviously $$H_n(S^2) = \begin{cases} \mathbb{Z}, & n=0,2 \\ 0, & \text{else}. \end{cases}$$ Moreover, we have that in this case, $$T_f = \frac{S^2 \times I}{(x,0) \sim (-x,1)}.$$ I'm unsure of how to proceed, particularly due to the fact that I'm unclear as to what the map $1-f_{\ast} : H_n(X) \to H_n(X)$ is defined to be. REPLY [3 votes]: I think the answer is: $$ H_3 (T_f) = 0, \qquad H_2 (T_f) = \mathbb Z_2, \qquad H_1 (T_f) = \mathbb Z, \qquad H_0 (T_f) = \mathbb Z.
$$ The only thing we have to care about is finding the effect of $\ f_*\ $ on the homology groups $\quad H_2 (S^2) = \mathbb Z \quad $ and $\quad H_0 (S^2) = \mathbb Z. \quad $ Now the $0$-th induced morphism is just the identity: of course $f$ maps the sphere to itself, and we know that the $0$-th homology group is just the free $\mathbb Z$-module generated by the connected components of our space (cf. E.H. Spanier, "Algebraic Topology", Chapter 4, page 155). So, in our exact sequence $$ H_1 (S^2)\longrightarrow H_1 (T_f) \longrightarrow H_0 (S^2)\xrightarrow{ 1 - f_* } H_0 (S^2) \longrightarrow H_0 (T_f) \longrightarrow 0 $$ we can substitute $$ H_1 (S^2) = 0, \qquad H_0 (S^2) = \mathbb Z, \qquad 1-f_* = 0, \qquad $$ to obtain $$ 0 \longrightarrow H_1 (T_f) \longrightarrow \mathbb Z \xrightarrow{\ \ 0 \ \ } \mathbb Z \longrightarrow H_0 (T_f) \longrightarrow 0. $$ Hence, $$ H_1 (T_f) = \mathbb Z, \qquad H_0 (T_f) = \mathbb Z. $$ It remains to see $$ ... \longrightarrow H_3 (S^2)\longrightarrow H_3 (T_f) \longrightarrow H_2 (S^2)\xrightarrow{ 1 - f_* } H_2 (S^2) \longrightarrow H_2 (T_f) \longrightarrow H_1 (S^2) \longrightarrow ... $$ Of course $$ H_3 (S^2) = H_1 (S^2) = 0. $$ We need to know what $$f_*: H_2 (S^2)\rightarrow H_2 (S^2) $$ is. Now $\quad 1 \in H_2 (S^2) \quad $ can be thought of as the orientation class of $S^2$ (see, for example, https://en.wikipedia.org/wiki/Orientability#Homology_and_the_orientability_of_general_manifolds). For $n$ even the antipodal map is orientation-reversing (M. P. do Carmo, Riemannian Geometry, Chapter 0, page 20), and, being a homeomorphism of degree $-1$, it has to induce the map $-1$ in homology. This is also directly stated in E.H. Spanier, "Algebraic Topology" (Chapter 4, page 196, section 7, points 9-10), which should be a credible enough reference. So, since $\ 1 - (-1) = 2\ $ (just kidding), and $\ H_2 (S^2) = \mathbb Z $, $$ 0 \longrightarrow H_3 (T_f) \longrightarrow \mathbb Z\xrightarrow{\quad 2 \times \quad } \mathbb Z \longrightarrow H_2 (T_f) \longrightarrow 0 $$ we have $H_3 (T_f) = \ker (2 \times ) = 0 \ $ and $\ H_2 (T_f) = \mathbb Z / 2 \mathbb Z = \mathbb Z_2.$ A couple of comments. The result is quite clear: $f$ is orientation-reversing, so one should expect $T_f$ to be a non-orientable manifold, i.e. $H_3 (T_f) = 0. \ $ $H_0$ and $H_1$ are obvious, since the mapping torus is an $S^2$-fibration over $S^1$. As for $H_2$, you see that two copies of $S^2$ will cancel each other in $T_f$: just think of one as the inverted copy of the other. Second thing: if you want to learn algebraic topology, you really have to study the textbook by E. H. Spanier. It's old but gold.<|endoftext|> TITLE: If $\{x,y,z\}\subset[-1,1]$ and $x+y+z=0$ so $\sum\limits_{cyc}\sqrt{1+x+\frac{y^2}{6}}\leq3$ QUESTION [18 upvotes]: Let $\{x,y,z\}\subset[-1,1]$ such that $x+y+z=0$. Prove that: $$\sqrt{1+x+\frac{y^2}{6}}+\sqrt{1+y+\frac{z^2}{6}}+\sqrt{1+z+\frac{x^2}{6}}\leq3$$ I tried C-S, but without success. REPLY [2 votes]: @cafaxo gave a nice method to eliminate the root signs. Let $\lambda_1 = \frac{2+x}{6}, \ \lambda_2 = \frac{2+y}{6}, \ \lambda_3 = \frac{2+z}{6}.$ It holds that $\lambda_1, \lambda_2, \lambda_3 > 0; \ \lambda_1 + \lambda_2 + \lambda_3 = 1.$ Note that $t\mapsto \sqrt{t}, \ t \ge 0$ is concave.
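Before the algebra, a quick numerical scan over the constraint set supports the bound (a minimal sketch; the grid step is arbitrary):

import numpy as np

best = -np.inf
for x in np.arange(-1.0, 1.0001, 0.01):
    for y in np.arange(-1.0, 1.0001, 0.01):
        z = -x - y
        if abs(z) <= 1.0:
            s = (np.sqrt(1 + x + y*y/6) + np.sqrt(1 + y + z*z/6)
                 + np.sqrt(1 + z + x*x/6))
            best = max(best, s)
print(best)  # approximately 3, attained at x = y = z = 0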
@cafaxo obtained \begin{align} &\sqrt{1 + x + \frac{y^2}{6}} + \sqrt{1 + y + \frac{z^2}{6}} + \sqrt{1 + z + \frac{x^2}{6}}\\ =\ & \lambda_1 \sqrt{\frac{1 + x + \frac{y^2}{6}}{\lambda_1^2}} + \lambda_2 \sqrt{\frac{1 + y + \frac{z^2}{6}}{\lambda_2^2}} + \lambda_3 \sqrt{\frac{1 + z + \frac{x^2}{6}}{\lambda_3^2}}\\ \le \ & \sqrt{\frac{1 + x + \frac{y^2}{6}}{\lambda_1} + \frac{1 + y + \frac{z^2}{6}}{\lambda_2} + \frac{1 + z + \frac{x^2}{6}}{\lambda_3}}\\ = \ & \sqrt{\frac{6 + 6x + y^2}{2+x} + \frac{6 + 6y + z^2}{2+y} + \frac{6 + 6z + x^2}{2+z}}. \end{align} It suffices to prove that $$\frac{6 + 6x + y^2}{2+x} + \frac{6 + 6y + z^2}{2+y} + \frac{6 + 6z + x^2}{2+z} \le 9$$ or (noting that $z = -x-y$) \begin{align} &x^4+2 x^3 y+3 x^2 y^2+2 x y^3+y^4-2 x^3+9 x^2 y+15 x y^2+2 y^3\\ &\quad +4 x^2+4 x y+4 y^2 \ge 0 \tag{1} \end{align} for $x, y\in [-1, 1]; \ -1\le x + y \le 1.$ My solution: Let me give a different method to prove (1). With a computer, here is an SOS (sum of squares) solution: (1) is true since (note: $A_1, A_2, A_3, A_4$ are all positive semidefinite) \begin{align} &x^4+2 x^3 y+3 x^2 y^2+2 x y^3+y^4-2 x^3+9 x^2 y+15 x y^2+2 y^3+4 x^2+4 x y+4 y^2\\ =\ & \frac{1}{60}\Big[u^TA_1u + (1-x)u^TA_2u + (1-y)u^TA_3u + (x+y+1)u^TA_4u\Big] \end{align} where $$u = \left(\begin{array}{c} x\\ y\\ x^2\\ xy\\ y^2 \end{array}\right), $$ $$ A_1 = \left(\begin{array}{ccccc} 140 & 80 & -30 & 165 & 120\\ 80 & 150 & 55 & 230 & 51\\ -30 & 55 & 60 & 60 & -40\\ 165 & 230 & 60 & 380 & 120\\ 120 & 51 & -40 & 120 & 108 \end{array}\right), \quad A_2 = \left(\begin{array}{ccccc} 80 & 0 & 0 & 0 & 40\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 40 & 0 & 0 & 0 & 20 \end{array}\right), $$ $$A_3 = \left(\begin{array}{ccccc} 0 & 0 & 0 & 0 & 0\\ 0 & 10 & 0 & 0 & 14\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 14 & 0 & 0 & 20 \end{array}\right), \quad A_4 = \left(\begin{array}{ccccc} 20 & 40 & 0 & 0 & -20\\ 40 & 80 & 0 & 0 & -40\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ -20 & -40 & 0 & 0 & 20 \end{array}\right).$$<|endoftext|> TITLE: How prove this equation has only one solution $\cos{(2x)}+\cos{x}\cdot\cos{(\sqrt{(\pi-3x)(\pi+x)}})=0$ QUESTION [10 upvotes]: Let $x\in (0,\dfrac{\pi}{3}]$. Show that the equation $$\cos{(2x)}+\cos{x}\cdot\cos{(\sqrt{(\pi-3x)(\pi+x)}})=0$$ has a unique solution $x=\dfrac{\pi}{3}$. I tried constructing $$f(x)=\cos{(2x)}+\cos{x}\cdot\cos{(\sqrt{(\pi-3x)(\pi+x)}})\, , \quad\quad f(\dfrac{\pi}{3})=0,$$ but I found that this function is not monotonic (see WolframAlpha). Now the key is to prove that $f$ has no zero in $(0,\frac{\pi}{3})$. Since $$f\left(\frac{\pi}{6}\right)=\dfrac{1}{2}+\dfrac{1}{\sqrt{3}}\cos{\left(\dfrac{1}{2}\sqrt{\dfrac{7}{3}}\pi\right)}=-0.138\cdots<0,$$ the task is, in other words, to prove that $$f(x)<0,\forall x\in(0,\dfrac{\pi}{3}) \, .$$ REPLY [2 votes]: Let $f$ be defined on $(0,\dfrac{\pi}{3}]$ by $$f(x)=\cos(2x)+\cos(x)\cdot\cos\left(\sqrt{(\pi-3x)(\pi+x)}\right)$$ It is immediate that the domain of the function could be $-\pi\le x\le\dfrac{\pi}{3}$ and that $f(-\pi)=f(-\frac{2\pi}{3})=f(0)=f(\frac{\pi}{3})=0$ (which is easily seen from the radical), but the three values $x=-\pi,-\frac{2\pi}{3},0$ have been discarded by the convention on the domain. It remains to verify that the only point of $(0,\dfrac{\pi}{3}]$ such that $f(x)=0$ is $x=\dfrac{\pi}{3}$.
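A quick numerical scan corroborates this before the analytic argument (a minimal sketch; the resolution is arbitrary):

import numpy as np

x = np.linspace(1e-6, np.pi/3, 200000, endpoint=False)
f = np.cos(2*x) + np.cos(x) * np.cos(np.sqrt((np.pi - 3*x) * (np.pi + x)))
print(f.max())  # strictly negative, so f has no zero in (0, pi/3)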
Because of $\cos (2x)=2\cos^2(x)-1$ we have $$f(x)=\cos(x)\left(2\cos(x)+\cos\left(\sqrt{(\pi-3x)(\pi+x)}\right)\right)-1$$ Consider the function defined on $0\lt x\lt\dfrac{\pi}{3}$ $$h(x)= 2\cos(x)+\cos\left(\sqrt{(\pi-3x)(\pi+x)}\right)$$ Taking the derivative, $$h'(x)=-2\sin x+\frac{(\pi+3x)\sin(\sqrt{(\pi-3x)(\pi+x)})}{\sqrt{{(\pi-3x)(\pi+x)}}}$$ It follows $$\begin{cases}h'(x)\lt0\space \text{for }0\lt x\lt 0.3711\Rightarrow h(x)\text{ is decreasing in the interval}\space (0,0.3711)\\h(0.3711)\approx0.9734\text{ is a minimum of }h\text{ and }\cos(0.3711)\approx0.931929\\ h'(x)\gt 0\text{ for } x\gt0.3711;\space\space h(x_0)=1\text { for } x_0\approx0.538\text { and }\cos(0.538)\approx 0.858735 \end{cases}$$ Hence $h(x)\lt1$ for $0\lt x\lt0.538$, so $$\cos(x)h(x)\lt1\text{ on } 0\lt x\lt0.538$$ Consequently $$\color{red}{ f(x)=\cos(x)h(x)-1\lt0\text{ on } 0\lt x\lt0.538}\qquad(*)$$ It remains to prove the inequality for $0.538\lt x\lt \dfrac{\pi}{3}$ $$\begin{cases}h'(x)\gt0\space\text{ when }\space 0.3711\lt x\lt\dfrac{\pi}{3}\Rightarrow h(x)\text{ increasing on }\space (0.538,\space \dfrac{\pi}{3})\\h(0.538)\approx 1\text{ and }h(\dfrac{\pi}{3})=2\end{cases}$$ Consider now the function $k(x)=\cos(x)h(x)$. Taking into account that in the interval $I=(0.538,\space \dfrac{\pi}{3})$ the function $h(x)$ is increasing but $\cos(x)$ is decreasing from $\cos(0.538)\approx 0.858735$ to $\cos(\dfrac{\pi}{3})=\dfrac 12$, we cannot conclude that $k(x)$ is increasing on all of the interval $I$. Anyway, we calculate the minimum of $k(x)$ on $I$; this corresponds to a unique root of the equation $\cos(x)h'(x)=\sin(x)h(x)$, which is $x_0\approx 0.6439$, giving the minimum $k(x_0)\approx0.8481\gt0.$ It follows that $$k(x)\ge k(x_0)\gt 0\text{ on the interval }(0.538,\space\dfrac{\pi}{3})$$ hence $$0.8481\le k(x)\lt k\left(\dfrac{\pi}{3}\right)=\cos\left(\dfrac{\pi}{3}\right)h\left(\dfrac{\pi}{3}\right)=\dfrac 12\cdot 2=1$$ Consequently $$\color{red}{f(x)=k(x)-1\lt 0\text { on }(0.538,\space\dfrac{\pi}{3})}\quad(**)$$ Thus, by $(*)$ and $(**)$, $\dfrac{\pi}{3}$ is the only root of $f(x)$ such that $0\lt x\le \dfrac{\pi}{3}$.<|endoftext|> TITLE: distinguishable balls distinguishable boxes where each box contains at least 2 balls QUESTION [5 upvotes]: Consider $m$ distinguishable balls and $n$ distinguishable boxes where $m > n$ (the boxes and balls are already distinguishable, say they come with preassigned distinct labels). How many ways are there to distribute the balls into boxes such that each box contains at least $2$ balls? REPLY [2 votes]: We solve the cases of indistinguishable boxes and distinguishable boxes. The combinatorial species in the first case is $$\mathfrak{P}_{=n}(\mathfrak{P}_{\ge 2}(\mathcal{Z}))$$ which gives the EGF $$G(z) = \frac{(\exp(z)-z-1)^n}{n!}.$$ Extracting coefficients we get $$m! [z^m] G(z) = m! [z^m] \frac{(\exp(z)-z-1)^n}{n!} \\ = \frac{m!}{n!} [z^m] \sum_{k=0}^n {n\choose k} (\exp(z)-1)^k (-1)^{n-k} z^{n-k} \\ = \frac{m!}{n!} \sum_{k=0}^n {n\choose k} [z^{m+k-n}] (\exp(z)-1)^k (-1)^{n-k} \\ = \frac{m!}{n!} \sum_{k=0}^n {n\choose k} (-1)^{n-k} \times \frac{k!}{(m+k-n)!} {m+k-n\brace k} \\ = {m\choose n} \sum_{k=0}^n {n\choose k} (-1)^{n-k} \times \frac{k! (m-n)!}{(m+k-n)!} {m+k-n\brace k} \\ = {m\choose n} \sum_{k=0}^n {n\choose k} (-1)^{n-k} \times {m+k-n\brace k} {m+k-n\choose k}^{-1}.$$ We can verify this for some special values like $m=2n$ where we obtain $$(2n)! [z^{2n}] \frac{(\exp(z)-z-1)^n}{n!} \\ = (2n)!
\frac{1}{n!} \frac{1}{2^n} = \frac{1}{n!} {2n\choose 2,2,2,\ldots,2}$$ which is the correct value. We also get $$(2n+1)! [z^{2n+1}] \frac{(\exp(z)-z-1)^n}{n!} = (2n+1)! \frac{1}{n!} {n\choose 1} \frac{1}{6} \frac{1}{2^{n-1}} \\ = \frac{1}{(n-1)!} {2n+1\choose 2,2,2,\ldots,2,3}$$ which is correct as well. One more example is $$(2n+2)! [z^{2n+2}] \frac{(\exp(z)-z-1)^n}{n!} \\ = (2n+2)! \frac{1}{n!} \left({n\choose 1} \frac{1}{24} \frac{1}{2^{n-1}} + {n\choose 2} \frac{1}{6^2} \frac{1}{2^{n-2}}\right) \\= \frac{1}{(n-1)!} {2n+2\choose 2,2,2,\ldots,2,4} + \frac{1}{2} \frac{1}{(n-2)!} {2n+2\choose 2,2,2,\ldots 2,3,3}.$$ Finally observe that we get the values for distinguishable boxes by multiplying by $n!$, because the species now becomes $$\mathfrak{S}_{=n}(\mathfrak{P}_{\ge 2}(\mathcal{Z})).$$<|endoftext|> TITLE: What interior design features involving math or logic are good ideas? QUESTION [5 upvotes]: What interior design features that involve mathematics or formal logic, or things related to mathematics or formal logic (things about the subjects count as well), are good ideas? REPLY [5 votes]: In 1975 I worked for about eight months for a place that made custom wood floors. When I realized I was returning to college, I gave the boss a number of sketches of floor designs. The one I know he used was a hexagon design assembled from strips (the picture is not reproduced here). There is very little waste with this design, as the raw material comes as wood strips of constant width. As always, one may place strips of different color between the hexagons (which should all be the same type as far as number of pieces). There is skill involved: it turned out to work best to construct the hexagons in the shop on the very very thin plywood (just one ply, we called it luan I think), as it would take forever for workers to place the strips at the job site. Completed hexagons, glued onto luan, could be cut as needed. http://homeguides.sfgate.com/luan-wood-99466.html Later I made a tabletop out of strips of walnut with this design (picture not reproduced here). I did not do a good job. Oh, when I got to Berkeley, I went to the public library, Art and Architecture room. I thought I would find the hexagon design in basketweaving, but that was not it. It has been used in mosaic design, very long ago. Finally, because of the mystical significance of the number 42, I like this solid: the truncated rhombic triacontahedron, also called the chamfered dodecahedron. The hexagons are not regular: they result from taking a rhombus with diagonal ratio the golden ratio, then truncating the two more pointy vertices to get a not quite regular hexagon. There is a design place right around the corner from me; they seem to have lots of ceramic dodecahedra with glossy glazes. I made this out of Zometool pieces, maybe I will show them some day. However, they do not manufacture the decorative stuff they sell... I put in lots of internal structure, an icosahedron with radii in the center, then... The pedestal is one of those cylindrical boxes of Quaker Oats.<|endoftext|> TITLE: What is the goal of harmonic analysis? QUESTION [24 upvotes]: I am taking a basic course in harmonic analysis right now. Going in, I thought it was about generalizing the Fourier transform / series: finding an alternative representation of some function where something works out nicer than it did before. Now, having taken the first few weeks of this, it is not at all about Fourier analysis but about the Hardy-Littlewood maximal operator, interpolation theorems, Stein's theorem/lemma, and a lot of constants which we try to improve constantly in some bounds.
We are following Stein's book on singular integrals, I guess. Can anyone tell me where this is leading? Why are we concerned with these kinds of operators, and in which other areas do the results help? REPLY [2 votes]: Ultimately it helps one to prove theorems (like existence and uniqueness results) for partial differential equations.<|endoftext|> TITLE: Milk and Tea problem QUESTION [6 upvotes]: There are two cups on a table. One is filled with tea, the other with milk. Suppose we take a spoonful of tea from the first cup, place it into the cup with milk, and stir, and then do the same in the opposite direction. Which was there more of in the end: tea in the cup of milk, or milk in the cup of tea? REPLY [2 votes]: I think the two cups may even have different sizes, and still the amount of tea in the milk cup is the same as the amount of milk in the cup of tea. Consider the following: The volume of the spoon is $V$, and the amount of tea transferred from the cup of tea into the cup of milk is $T_1$ (first spoon). So $$V=T_1$$ Then you transfer a mixture of tea and milk from the cup of the mixture to the cup of tea (second spoon), where $$V=T_2+M$$ and $T_2$ and $M$ are the amounts of tea and milk in the second spoon, respectively. In the end, you will have $$(T_2+M)=T_1=V$$ So: The amount of milk in the cup of tea is $M$. The amount of tea in the second cup is $T_1-T_2$ (the tea brought in minus the tea taken back out), which equals $M$. Please correct me if I am wrong.<|endoftext|> TITLE: Which topics and textbooks to learn elementary school arithmetic and beyond? QUESTION [6 upvotes]: I'm an adult trying to learn math from the ground up in my free time. I've decided to start with arithmetic, but I don't know where to go next. I'm the type of person who learns by reading. I want to understand math, not to memorize it. What topics (starting from arithmetic and on) should I study? In what sequence? Please recommend textbooks. I know some very very basic things, such as addition, subtraction, and how to find square roots. My foundation in math is very poor. I can say, safely, that till yesterday, I didn't know what ones, tens, and hundreds are, what place value is, what points and coordinates are, and a lot more. REPLY [2 votes]: Two possibly useful old books that I've come across in a nearby library are: Aaron Bakst (1900-1962), Arithmetic for Adults. A Review of Elementary Mathematics, F. S. Crofts and Company, 1944, viii + 319 pages. one amazon.com review Burdette Ross Buckingham (1876-1962), Elementary Arithmetic. Its Meaning and Practice, Ginn and Company, 1947, viii + 744 pages. Interestingly, both seem to be freely available (legally also) on the internet. If you don't like lengthy on-screen reading, you can print out the pages (might have to do it one page at a time) or try to obtain a copy using interlibrary loan at your public library. Regarding your question about what to study after arithmetic for high school mathematics, the subjects would be (elementary) algebra (typically a two-year sequence), geometry, trigonometry, and precalculus (which often includes trigonometry). Rather than worry about textbooks to use after arithmetic at this time, I would recommend that you focus on the task at hand -- arithmetic. In general, you'll find that the more you know, the more your previous plans wind up being changed, because you begin to develop a better understanding of what approaches you like and you're better able to pick out books that function best for you (at these later times).
I learned a lot of math on my own (most of high school math and all of the 3-4 semester college calculus sequence, and much of elementary linear algebra) simply by picking books from libraries and bookstores that I liked and that were books I felt I could learn from. The more I learned, the more I found that my own choices tended to be better for me than what others might have suggested, which often tended to be little more than what textbook they learned from rather than a reasoned choice from the thousands of available textbooks.<|endoftext|> TITLE: Why do positive definite matrices have to be symmetric? QUESTION [16 upvotes]: Definitions of positive definiteness usually look like this: A symmetric matrix $M$ is positive definite if $x^T M x > 0$ for all vectors $x \neq 0$. Why must $M$ be symmetric? The definition seems to make sense for general square matrices. REPLY [17 votes]: Let the quadratic form $f$ be defined by $$f (\mathrm x) := \mathrm x^\top \mathrm A \,\mathrm x$$ where $\mathrm A \in \mathbb{R}^{n \times n}$. Since $\mathrm x^\top \mathrm A \,\mathrm x$ is a scalar, we have $(\mathrm x^\top \mathrm A \,\mathrm x)^\top = \mathrm x^\top \mathrm A \,\mathrm x$, i.e., $\mathrm x^\top \mathrm A^\top \mathrm x = \mathrm x^\top \mathrm A \,\mathrm x$. Hence, $$\mathrm x^\top \left(\frac{\mathrm A - \mathrm A^\top}{2}\right) \mathrm x = 0$$ Thus, the skew-symmetric part of matrix $\mathrm A$ does not contribute anything to the quadratic form. What is left is, then, the symmetric part $$\frac{\mathrm A + \mathrm A^\top}{2}$$ which is diagonalizable and has real eigenvalues and orthogonal eigenvectors, all nice properties. Addendum Taking affine combinations of $\mathrm A$ and $\mathrm A^\top$, we obtain $$\mathrm x^\top (\gamma \mathrm A + (1-\gamma) \mathrm A^\top) \mathrm x = f (\mathrm x)$$ which yields $f$ for all $\gamma \in \mathbb{R}$. Choosing $\gamma = \frac{1}{2}$, we obtain the symmetric part of $\mathrm A$.<|endoftext|> TITLE: Is a function that integrates to zero against all polynomials constant? QUESTION [7 upvotes]: Suppose $f : [0,\infty) \rightarrow \mathbb{R}$ satisfies $|f(x)| \leq e^{-x}$ for all $x \in (0,\infty)$, and also has the property that $$ \int_0^{\infty} f(x) x^n dx = 0 \qquad \forall n \in \{0,1,2,3,...\}. $$ Does it follow that $f$ is constant? If so, is this a standard theorem? Many thanks for your help. REPLY [2 votes]: For any real $s > 0$, the series for $e^{-sx}$ is an alternating series for $x > 0$, which gives an error bound in terms of the first neglected term: $$ e^{-x}\left|\sum_{n=0}^{N}(-1)^n\frac{(sx)^n}{n!}-e^{-sx}\right| \le e^{-x}\frac{(sx)^{N+1}}{(N+1)!}. $$ Integrating the right side in $x$ over $[0,\infty)$ and applying integration by parts repeatedly gives $$ \int_{0}^{\infty}e^{-x}\frac{(sx)^{N+1}}{(N+1)!}dx = s^{N+1}\int_{0}^{\infty}e^{-x}dx \rightarrow 0 \mbox{ as } N\rightarrow\infty \mbox{ for } 0 < s < 1. $$ Therefore, by your assumptions, $$ F(s)=\int_0^{\infty}f(x)e^{-sx}dx = \lim_{N\rightarrow\infty}\int_{0}^{\infty}f(x)\sum_{n=0}^{N}\frac{(-sx)^{n}}{n!}dx=0,\;\;\; 0 < s < 1. $$ At this point you can invoke uniqueness theorems about the Laplace transform in order to conclude that $f$ is $0$ a.e... Or you can use Morera's Theorem to show that $F$ is holomorphic for $\Re s > -1$ because (a) it is continuous and (b) integrals over triangles in this right half plane give $0$.
Then, because $F$ is holomorphic and vanishes on a non-trivial interval of the positive real axis, the identity theorem implies that $F$ is identically $0$ for $\Re s > -1$. Using that, $F(ir)=0$ for all real $r$; this is the Fourier transform of $f$, and that proves $f$ is $0$ a.e. by the Plancherel identity for Fourier transforms.<|endoftext|> TITLE: Differences between extended metric space and metric space QUESTION [5 upvotes]: Define an extended metric space $X$ to be a metric space $X$ except that the distance function $d$ maps from $X \times X\to [0,\infty]$ (note that $\infty$ is allowed). Looking at this post Metric assuming the value infinity it doesn't seem that any difficulty can arise when switching to an extended metric, as far as theorems go. However, compact sets are not necessarily bounded in extended metric spaces (consider two points with distance infinity). Is there a general theorem about when theorems in metric spaces can be converted into theorems in extended metric spaces? It would also be nice to provide other examples of things failing when an extended metric is used. REPLY [2 votes]: The reason why extended metric spaces are so similar to usual metric spaces is that, in addition to the obvious formal analogy, they share even deeper similarities: given an extended metric $d$ in $X$, define $$ d_1(x,y) = \text{min}\{1,\ d(x,y)\}, \quad \text{for all }x,y\in X. $$ Then $d_1$ is a bona fide metric and the topology defined by $d_1$ is the same as that defined by $d$ (en passant, this is the usual trick for showing that every metric space is equivalent to a bounded one). However, anything related to boundedness will be left out of the analogy, as such concepts are not intrinsic to topology.<|endoftext|> TITLE: Funny double infinite sum QUESTION [57 upvotes]: I was playing with a modified version of Pascal's triangle (with ${n \choose k}^{-1}$ instead of $n \choose k$ everywhere) and this infinite sum popped out: $$\sum_{k=2}^{\infty}\sum_{n=1}^{\infty} \frac{1}{n(n+1)(n+2)...(n+k-1)} $$ The partial sums seem to approach $\alpha \approx 1.317...$ Does a closed form for $\alpha$ exist? REPLY [15 votes]: [Not an answer, but too long to fit as a comment.] [Edit: @adjan points out that this is known as the Leibniz harmonic triangle, which I was unaware of.] I don't know if this is related or not: I noticed a curious fact a few years ago about reciprocals of binomial coefficients. If you take Pascal's triangle, but instead of putting $\binom{n}{k}$ in each entry, you put the reciprocal of $(n+1)\binom{n}{k}$, you get an upside-down Pascal's triangle, with each number being the sum of the two numbers below it: $$\begin{array}{ccccccccccc} &&&&&1 \\&&&&\frac12&&\frac12 \\&&&\frac13&&\frac16&&\frac13 \\&&\frac14&&\frac1{12}&&\frac1{12}&& \frac14 \\& \frac15&&\frac1{20}&&\frac1{30}&&\frac1{20}&&\frac15 \\ .^{\large{.}^{\LARGE{.}}}&&\vdots&&\vdots&&\vdots&&\vdots&&{}^{{}^{{}^{\LARGE{.}}}}{}^{\hspace{-1mu}\large{.}}. \end{array}$$ The proof that it works is straightforward: \begin{align}\require{cancel} \frac1{(n+1)\binom{n}{k}}+\frac1{(n+1)\binom{n}{k+1}}&=\frac{1}{n+1}\frac{\binom{n}{k}+\binom{n}{k+1}}{\binom{n}{k}\binom{n}{k+1}} \\&=\frac1{n+1}\binom{n+1}{k+1}\frac{k!\,(n-k)!}{n!}\frac{(k+1)!\,(n-k-1)!}{n!} \\&=\frac1{n+1}\frac{(n+1)!}{\bcancel{(k+1)!}\cancel{(n-k)!}}\frac{k!\,\cancel{(n-k)!}}{n!}\frac{\bcancel{(k+1)!}\,(n-k-1)!}{n!} \\&=\frac{\cancel{(n+1)!}}{\cancel{(n+1)\cdot n!}}\frac{k!\,(n-k-1)!}{n!} \\&=\frac1{n\binom{n-1}{k}}.
\end{align} I have no idea if this is well-known or not — I hadn't come across it before.<|endoftext|> TITLE: Projection formula, Bott and Tu QUESTION [10 upvotes]: Bott and Tu, Proposition 6.15: Let $\pi: E\rightarrow M$ be an oriented rank $n$ vector bundle, $\tau$ a form on $M$ with compact support and $\omega$ a form with compact support along the fiber, with $\omega \in \Omega^q_{cv}(E)$ and $\tau \in \Omega_c^{m+n-q}(M)$. Then, with the local product orientation on $E$, $$\int_{E} (\pi^* \tau) \wedge \omega=\int_{M}\tau\wedge \pi_{*}\omega$$ I do not understand why the integrand has compact support on $E$. I understand that $\pi^* \tau$ is zero outside a closed set, and so its product with $\omega$ has compact support in each fiber, but this does not imply that it has compact support on all of $E$. Have Bott & Tu made a mistake here? Edit: As the answer below shows, it is an error. And also see this: Tubular neighborhood: compact support for the pullback of a form with compact support REPLY [6 votes]: This does indeed appear to be an error. For instance, consider the case where $m=n=1$ and $q=1$, $M=\mathbb{R}$, $E=M\times\mathbb{R}$ is the trivial bundle, and $\tau$ is nonzero on all of $[0,1]$. We could then have $\omega$ be a vertical $1$-form on $E$ which for each positive integer $n$ has a little bump on the set $[1/(n+1),1/n]\times[n,n+1]$, and is $0$ outside these sets. Then $\omega$ has compact support on each fiber, but $\pi^*\tau\wedge\omega$ does not have compact support, and may not even be integrable if $\omega$ gets large enough on its bumps. I believe the fix is to change the definition of "compact support along the fibers". You need to require not just that the intersection of the support of $\omega$ with each fiber is compact, but that the map from the support of $\omega$ to $M$ is a proper map. That is, for any compact set $K\subseteq M$, the support of $\omega$ on $\pi^{-1}(K)$ is compact. This certainly would solve the issue you have observed, since you can just take $K$ to be the support of $\tau$.<|endoftext|> TITLE: How do we know that graphs have the shapes they do? QUESTION [8 upvotes]: This is a pretty simple question I think, but I don't know how to answer it. For example, we all know that a parabola such as $y=x^2$ looks something like this (image of a parabola omitted). But my question is, how do we know that the graph actually looks like this? We can't actually plot an infinite number of points to see that the graph follows a smooth pattern; instead, we can only plot points individually and just connect them. Why do we assume that the points can be connected? If anyone needs any clarification about my question, feel free to ask. REPLY [4 votes]: Continuity and smoothness aren't really enough by themselves to be certain that we've drawn a given graph correctly: maybe in the interval $(17.123487634827631, 17.123487634827632)$ there's a huge spike! We wouldn't see this just by plotting a bunch of points unless by sheer dumb luck we picked one in that interval, so how can we rule it out? To rule this out, we analyze the function. For example, we can prove that on the positive reals, the function $f(x)=x^2$ is increasing; this rules out such a spike, because "half" the spike would have to be decreasing (think about it). Similarly we can figure out how fast $f$ can ever increase, in any interval (that is, we can find the maximum of the derivative of $f$), and so forth.
Perhaps the most useful fact about $f$ in terms of graphing it is that $f$ is convex: if we let $L$ be the line connecting $(a, f(a))$ to $(b, f(b))$ (for $a<b$), then the graph of $f$ between $a$ and $b$ lies on or below $L$. So once a few points are plotted, convexity together with the monotonicity facts above traps the curve, and the familiar smooth sketch is the only shape consistent with them.<|endoftext|> TITLE: Models of the theory of real closed fields with extra constants QUESTION [6 upvotes]: Let $L$ be the language of real closed fields $\{0,1,+,-,\times\}$, and $T$ the theory of real closed fields, i.e. the $L$-sentences that are true in the standard $\mathbb{R}$ model. Let $L'$ be $L$ extended with a countable set of constants. If $\Gamma$ is a satisfiable countable set of $L'$ sentences that contains $T$, must it have a model that can be obtained from the standard $\mathbb{R}$ model of $L$ by assigning a real to each of the added constants? If yes, is the same true when there are $|\mathbb{R}|$-many added constants? When there are $|\mathbb{R}|$-many added constants and when $\Gamma$ has larger cardinality? If you create a new theory of a model by augmenting the language with constants, does the new theory contain sentences that say anything "original"? is a related question. REPLY [5 votes]: The other two answers are absolutely correct; let me give an answer in a slightly different direction. Say that a theory $T$ in the language of ordered fields + constants is $\mathbb{R}$-satisfiable if it has a model which is an expansion of $\mathbb{R}$ with the usual ordered field structure; I'm interested in the model-theoretic properties of $\mathbb{R}$-satisfiability. One natural question to ask is, "Is there a compactness theorem for $\mathbb{R}$-satisfiability?" The answer is no (and this provides another counterexample to the question you ask). Consider the theory $T$ consisting of the axioms of real closed fields, together with $c_i+1<c_{i+1}$, $c_i>0$, and $c_0>c_i$ for $i>0$. Then every finite subset of $T$ is $\mathbb{R}$-satisfiable, but $T$ itself is not, since $c_0$ would have to be infinite. What about "compactness in other cardinalities"? Say that $\mathbb{R}$-satisfiability is $(\kappa, \lambda)$-compact if whenever $\Gamma$ is a set of sentences of cardinality $<\lambda$, and every subset of cardinality $<\kappa$ is $\mathbb{R}$-satisfiable, then $\Gamma$ is $\mathbb{R}$-satisfiable. (So usual compactness is $(\omega, \infty)$-compactness, and countable compactness is $(\omega,\omega_1)$-compactness.) Is $\mathbb{R}$-satisfiability $(\omega_1, \omega_2)$-compact? No! Given $\omega_1$-many constants $c_\eta$ ($\eta<\omega_1$), consider the theory $S=\{c_\alpha<c_\beta : \alpha<\beta<\omega_1\}$. Every subset of $S$ of cardinality $<\omega_1$ mentions only countably many constants and so is $\mathbb{R}$-satisfiable, but $S$ itself is not, since a model would yield a strictly increasing $\omega_1$-sequence of reals, and no such sequence exists (between consecutive terms we could pick distinct rationals).<|endoftext|> TITLE: Is $f(x)$ necessarily a polynomial if $f(f(x))$ is? QUESTION [6 upvotes]: If $g(x)$ is a polynomial, and $$g(x) = f(f(x))\ \forall x\in \mathbb{R}$$ is $f(x)$ necessarily a polynomial, given that $f$ is infinitely differentiable? Reading this question I noticed that the answer fails if we consider the domain to be the whole real line. I'm wondering whether removing the increasing condition allows for solutions that work across the whole real line, without allowing for "weird" functions like $$f(x) = \left|x\right|^{\sqrt{2}}$$ hence the infinitely differentiable condition. The only progress I've made on this is as follows: Assume $g(x)$ has degree $d$ and leading coefficient $a$. Thus $$\lim_{x\to\infty} \frac{g(x)}{ax^d} = 1$$ $$\lim_{x\to\infty} \frac{f(f(x))}{ax^d} = 1$$ If $$x^{k-\epsilon} \ll f(x) \ll x^{k+\epsilon}\ \forall\ \epsilon>0$$ for some $k$ (which I think has to hold), then $$x^{k^2-\epsilon} \ll f(f(x)) \ll x^{k^2+\epsilon}$$ and thus $d=k^2$. I don't think this does much though. Does anyone have any ideas? REPLY [4 votes]: The answer is negative.
Let $f$ be any involution on $\mathbb{R}$, i.e. any function whose graph is symmetric with respect to the line $y=x$. Then $g(x)=f(f(x)) = x$ is a polynomial, but not all involutions $f$ are polynomials.<|endoftext|> TITLE: If a collection of disjoint disks covers the unit square, then the circumferences add up to infinity QUESTION [6 upvotes]: A question from Makarov & Podkorytov, Real Analysis: Measures, Integrals and Applications (I can't recall what page, but it's in the chapter about product measure). Assume a collection of disjoint disks covers the unit square $[0,1]^2$ up to a (Lebesgue) null set. Then the sum of the lengths of their boundaries is infinite. My attempt: We denote the disks by $\{D_n\}_{n=1}^\infty$, their corresponding radii by $r_n$, the union $\bigcup_{n=1}^\infty D_n = C$, and Lebesgue measure on $\mathbb{R}^2$ by $m^2$, so that $\sum_{n=1}^\infty m^2(D_n) =\sum_{n=1}^\infty \pi r_n^2 =1$. But this does not (and I think cannot) produce a good bound on the sum of the lengths of the circumferences. Almost all (vertical and horizontal) cross-sections must have 1-dimensional measure $1$. I speculate this implies that, up to a null set, every cross-section intersects infinitely many disks, but I did not manage to show this. If the last remark is true, then maybe we can show that almost all cross-sections of the circumferences have positive measure (the fact that the cross-section intersects infinitely many disks is encouraging). REPLY [2 votes]: Proof 1 This proof heavily uses a result in geometric measure theory, which is not really necessary here, but it is really quick and was the first that came to mind. The result follows from Theorem 4.17 about the structure of Caccioppoli partitions in "Ambrosio, Fusco, Pallara - Functions of bounded variation and free discontinuity problems". Adapted to a partition with regular sets it states: Theorem Given a countable partition (up to a negligible set) $C=\bigcup_n D_n$ where the $D_n$ are regular and $\sum_n P(D_n)<\infty$, $\mathcal{H}^1$-a.e. point of $C$ is contained in $$\left(\bigcup_n D_n\right)\cup\left(\bigcup_{m\neq n}(\partial D_n\cap \partial D_m)\right)$$ where $\mathcal{H}^1$ is the $1$-dimensional Hausdorff measure. In particular, $\mathcal{H}^1$-a.e. point of $\Gamma=\bigcup_n \partial D_n$ is contained in the second union. In our case however the second union is clearly a countable set (since every pair of circumferences shares at most one point) and therefore $\mathcal{H}^1$-negligible, while $\Gamma$ has positive $\mathcal{H}^1$ measure. Therefore, supposing that $\sum P(D_n)<\infty$ holds for the disks and applying the theorem, we obtain a contradiction (in fact the argument works for every partition with e.g. strictly convex sets). Proof 2 This follows your proposed approach. As you already said, almost every vertical cross-section has measure $1$. Among these, suppose that a cross-section $v=\{x\}\times [0,1]$ intersects finitely many disks. In particular, $\{v\cap D_n\}_n$ gives a finite collection of disjoint intervals which has full measure inside $v$. It follows that $v\cap \Gamma$ is made of a finite number of points, which must belong to the boundaries of two disks simultaneously, but we already noticed that this set is countable. Therefore a.e. cross-section intersects an infinite number of disks.
From here we can conclude in different ways: 2.1 From the Area formula for rectifiable sets, another tool from GMT (see Theorem 2.91 in the same reference): calling $\pi$ the projection on the $x$ axis, $$\mathcal{H}^1(\Gamma)\geq \int_\Gamma J_1 (d^\Gamma\pi(z))d\mathcal{H}^1(z)=\int_0^1 \#\{\Gamma\cap \pi^{-1}(\{x\})\}dx=+\infty.$$ 2.2 From the Crofton formula, repeating the same argument above to obtain that a.e. cross-section in any possible direction intersects an infinite number of (boundaries of) disks. Note that this is basically the area formula in disguise, in a simpler setting and with a simpler proof. 2.3 Knowing this exercise comes from the section on product measures, we can conclude in a more elementary and pertinent way: observe first that $P(D_n)=\pi \mathcal{H}^1(d_n)$ where $d_n$ is the horizontal diameter of the disk $D_n$, and call $E=\bigcup_n d_n$. Now consider the measure $\mathcal{H}^1\otimes \mathcal{H}^0=\mathcal{L}^1\otimes \mu$ where $\mu$ is the counting measure. Then by Fubini \begin{align}\frac1\pi\sum_n P(D_n)=\sum_n \mathcal{L}^1(d_n)&=\int\limits_0^1 d\mu(y)\int\limits_0^1\chi_E(x,y) d\mathcal{L}^1(x) \\ &= \int\limits_0^1 d\mathcal{L}^1(x) \int\limits_0^1 \chi_E(x,y) d\mu(y) \\ &=\int\limits_0^1 d\mathcal{L}^1(x) \#(E\cap (\{x\}\times [0,1]))=+\infty \end{align} because any vertical cross-section intersecting $D_n$ will also intersect $d_n$.<|endoftext|> TITLE: What is the most expensive item I could buy with £50? QUESTION [71 upvotes]: I was set the following question during the discrete mathematics module of my degree and despite my instructor explaining his working to me I still disagree with the answer he says is correct. Can someone please help me either understand where my mistake is or help me prove that my instructor's answer is incorrect? It is the Christmas festive season again and your manager is very pleased with your performance and gives you a £50 Amazon gift card as a Christmas bonus. Value added tax (VAT) or sales tax is currently 20%. Determine the price of the most expensive taxable item you can buy with the gift card. Show your working and not just the answer. It's a pretty horribly worded question! My gut feeling was £50, as all UK retail prices are already inclusive of VAT. My Answer: Let the total price inclusive of VAT $=x=$ £50 Let the rate of VAT $=y=0.2\ (20\%)$ Let the price exclusive of VAT $= z$ $x = z + zy$ $50 = z + 0.2z$ $50 = 1.2z$ $50 / 1.2 = z$ $z = 41.666...$ Instructor's Answer: Let the price of the most expensive taxable item be $= x$ Let the 20% VAT on £50 $= y = (20/100)*50 = 10$ Our equation can be written as: $50 = x + y$ $x = 50-y$ $x = 50-10$ $x = 40$ Update from instructor: It looks like you and I are going to have some interesting discussions during the course of this module. I see where you are “going wrong” for want of a better phrase. You are assuming that the £50 includes VAT, but that is the wrong assumption. Sometimes easy to make that automatic jump or connection to real life scenario, but this question has nothing to do with the actual UK VAT laws. Maybe it could have been phrased differently, but the £50 is SUBJECT to a 20% VAT which implies that VAT is not included and has to be deducted from the £50. Forget the UK law for now and you will see why it’s £40. The point really is not even about how much the item is, but it is about rearranging an equation and solving for x.
In my opinion there are so many errors in his logic that it's not worth me pushing this point any further, as he will be my teacher for the next 3 months anyway. REPLY [2 votes]: The correct answer is, of course, 50 pounds. There are items that have a reduced (5%) VAT and even some with no VAT added (food, I think). So you just have to find one of those items.<|endoftext|> TITLE: Does the sum $\sum_{n \geq 1} \frac{2^n\operatorname{mod} n}{n^2}$ converge? QUESTION [40 upvotes]: I am somewhat a noob, and I don't recall my math preparation from college. I know that the sum $\displaystyle \sum_{n\geq 1}\frac{1}{n}$ is divergent, and my question is whether the sum$$\sum \limits _{n\geq 1}\frac{2^n\mod n}{n^2}$$converges. I think it does not, but I do not know how to prove that! Thanks! REPLY [5 votes]: The point of this post is to give some plots that corroborate Sangchul Lee's argument. These were produced by Mathematica. The first plot lists the numbers $(2^n\mod n)/n$ for all multiples of $5$. You see the horizontal lines at multiples of $0.2$. They were already visible in the unrestricted plot, when I didn't restrict $n$ to multiples of five, but here they are easier to spot. When we only include the values $n=5p$, the picture is even clearer. Multiples of five are not the only structure in there: restricting the choice of $n$ to numbers of the form $n=7p$ gives something quite similar. (The plots themselves are not reproduced here.)<|endoftext|> TITLE: Calculating the integral $\int\limits_{0}^{2\pi} \frac{d \theta}{a^2 \sin^2\theta+b^2 \cos^2\theta}$ QUESTION [5 upvotes]: I wanted to calculate $$\int\limits_{0}^{2\pi} \frac{d \theta}{a^2 \sin^2\theta+b^2 \cos^2\theta}$$ So I solved the indefinite integral first (by substitution): $$\int\frac{d \theta}{a^2 \sin^2\theta+b^2 \cos^2\theta}=\frac{1}{b^2}\int\frac{d \theta}{\cos^2\theta \left(\frac{a^2}{b^2} \tan^2\theta+1 \right)} =\left[u=\frac{a}{b}\tan\theta, du=\frac{a}{b\cos^2\theta} d\theta \right ]\\=\frac{1}{b^2}\int\frac{b}{a\left(u^2+1 \right)}du=\frac{1}{ab}\int\frac{du}{u^2+1}=\frac{1}{ab} \arctan \left(\frac{a}{b}\tan\theta \right )+C$$ Then: $$\int\limits_{0}^{2\pi} \frac{d \theta}{a^2 \sin^2\theta+b^2 \cos^2\theta}=\frac{1}{ab} \arctan \left(\frac{a}{b}\tan (2\pi) \right )-\frac{1}{ab} \arctan \left(\frac{a}{b}\tan 0 \right )=0$$ Which is incorrect (the answer should be $\frac{2\pi}{ab}$ for $a>0,b>0$). On the one hand, the substitution is correct, as well as the indefinite integral itself (according to Wolfram it is indeed $\frac{1}{ab} \arctan \left(\frac{a}{b}\tan\theta \right )$), but on the other hand I can see that had I put in the limits during the substitution I'd have gotten $\int\limits_{0}^{0} \dots = 0$, because $\theta = 0 \to u=0$ and $\theta = 2\pi \to u=0$. Why is there a problem, and how can I get the correct answer? Edit: Here is Wolfram's answer (screenshot omitted). Wolfram is correct because $$\frac{a^2 b^2}{2}\int\limits_{0}^{2\pi} \frac{d \theta}{a^2 \sin^2\theta+b^2 \cos^2\theta}$$ is the area of an ellipse (defined by $x=a\cos t , y=b\sin t$), that is, $$\frac{a^2 b^2}{2}\int\limits_{0}^{2\pi} \frac{d \theta}{a^2 \sin^2\theta+b^2 \cos^2\theta}=\pi ab$$ REPLY [10 votes]: The substitution is incorrect: the tangent is not bijective on the interval $[0,2\pi]$. First, you need to restrict yourself to an interval on which the tangent behaves better.
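As a quick sanity check on the expected value, direct numerical quadrature with arbitrary test values, say $a=2$ and $b=3$, returns $2\pi/(ab)$ (a minimal sketch, assuming SciPy is available):

import numpy as np
from scipy.integrate import quad

a, b = 2.0, 3.0
val, _ = quad(lambda t: 1.0 / (a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2), 0, 2*np.pi)
print(val, 2*np.pi / (a*b))  # both print approximately 1.047198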
Using the $\pi$-periodicity of the function you want to integrate, you can show that $$\int_0^{2 \pi} \frac{1}{a^2 \sin^2 (\theta)+b^2 \cos^2 (\theta)} d \theta = 2 \int_{-\pi/2}^{\pi/2} \frac{1}{a^2 \sin^2 (\theta)+b^2 \cos^2 (\theta)} d \theta,$$ and go from there. Note that this is a good warning about using Wolfram (or any formal computation system): the formula for the indefinite integral is good, but it holds only on each interval $(k\pi -\pi/2, k\pi+\pi/2)$, which the program does not tell you.<|endoftext|> TITLE: Power set of a set with an empty set QUESTION [8 upvotes]: When a set has the empty set as an element, e.g. $\{\emptyset, a, b \}$, what is the power set? Is it: $$ \{ \emptyset, \{ \emptyset \}, \{a\}, \{b\}, \{\emptyset, a\}, \{\emptyset, b\}, \{a, b\}, \{\emptyset, a, b\}\}$$ Or $$ \{ \emptyset, \{a\}, \{b\}, \{\emptyset, a\}, \{\emptyset, b\}, \{a, b\}, \{\emptyset, a, b\}\}$$ Or $$ \{ \{\emptyset\}, \{a\}, \{b\}, \{\emptyset, a\}, \{\emptyset, b\}, \{a, b\}, \{\emptyset, a, b\}\}$$ The confusion arises for me because the power set of every non-empty set contains the empty set. Well, the original set already has the empty set. So we don't need a subset with an empty set. Somehow, the first one seems correct. Yet, I can't seem to accept it. REPLY [2 votes]: Your suggestions differ by having $\emptyset$ and/or $\{\emptyset\}$ included or not. We have $\emptyset\in\mathcal P(X)$ because $\emptyset\subseteq X$ (which would hold for any other $X$ as well). We have $\{\emptyset\}\in\mathcal P(X)$ because $\{\emptyset\}\subseteq X$ (which is the case because $\emptyset\in X$ in this specific problem). Therefore, your first variant is correct (and the other two are incorrect because $\emptyset\ne\{\emptyset\}$).<|endoftext|> TITLE: Find a closed form to $\sum\limits_{i=2}^{n} \frac{H_i}{i+1}$ QUESTION [5 upvotes]: So I'm trying to do this annoying proof, and without going into further details, I think after quite a while of thinking I found it. Now I get stuck with an annoying sum (of sums...) where I don't quite know if there exists a closed form and, if so, how to find it. So as I already said, I try to find a closed form for the series $\sum\limits_{i=2}^{n} \frac{H_i}{i+1}$, where $H_n$ is the $n$-th harmonic number ($H_n=\sum\limits_{i=1}^{n} \frac{1}{i}$). So yeah, any hint for a closed form of the above series is more than welcome! Thanks in advance for any help. REPLY [4 votes]: It can also be proven using summation by parts: $$S=\sum_{i=2}^{n}\frac{H_{i}}{i+1}=\sum_{i=2}^{n}\frac{1}{i+1}\cdot H_{i}=H_{n}\left(H_{n+1}-\frac{3}{2}\right)-\sum_{i=2}^{n-1}\frac{\left(H_{i+1}-\frac{3}{2}\right)}{i+1} $$ hence $$\sum_{i=2}^{n}\frac{H_{i}}{i+1}=H_{n}\left(H_{n+1}-\frac{3}{2}\right)-\sum_{i=2}^{n-1}\frac{H_{i+1}}{i+1}+\frac{3}{2}\sum_{i=2}^{n-1}\frac{1}{i+1} $$ $$=H_{n}H_{n+1}-\sum_{i=2}^{n}\frac{H_{i}}{i+1}+\frac{H_{n}}{n+1}-\sum_{i=2}^{n-1}\frac{1}{\left(i+1\right)^{2}}-\frac{9}{4} $$ so $$\sum_{i=2}^{n}\frac{H_{i}}{i+1}=\color{red}{\frac{1}{2}\left(H_{n}H_{n+1}+\frac{H_{n}}{n+1}-H_{n}^{\left(2\right)}-1\right)}.$$ Note that this is the same result as in the other answers, since $$H_{n}=H_{n+1}-\frac{1}{n+1}.$$<|endoftext|> TITLE: Infinite-dimensional representations of an abelian group QUESTION [5 upvotes]: How do you show that an abelian group has no irreducible infinite-dimensional representation? Is it only true for a locally compact group? The only proof I found myself is to study maximal ideals of the group C*-algebra. Do you know a simple proof without operator algebra theory?
REPLY [3 votes]: Let $K=\mathbb{C}(t)$, the field of rational functions. Then $K$ is an infinite-dimensional irreducible complex representation of the multiplicative group $K^\times$.<|endoftext|> TITLE: Relationship between Taylor and Weierstrass theorem QUESTION [7 upvotes]: I'm not a mathematician, but these two theorems sound related to me. Taylor's theorem. Every $k$-times differentiable function can be approximated in a neighborhood around a given point by a $k$-th order polynomial to an arbitrary degree. Weierstrass theorem. Every continuous function defined on a closed interval $[a, b]$ can be approximated to an arbitrary degree by a polynomial function. (The statements are probably not precise.) I always wondered what the underlying relationship between those two is. Does one imply the other, or are they each special cases of some more general result? REPLY [4 votes]: Morally, these two theorems shouldn't say much about each other. If either were to imply the other, it seems less absurd that Weierstrass implies Taylor: if all we know is Taylor, then we only have very weak data about a generic continuous function (if $f$ were absolutely continuous then we could perhaps use its antiderivative to some end, but generally speaking, no). But this direction also seems unlikely, because Weierstrass gives no control at all on the degree of the polynomial. As rych points out, the two kinds of approximation suggested by these two theorems are different. Weierstrass' approximation is uniform, which means that no point in the approximation can be further than a specified tolerance from the original. But Taylor's approximation is a much more subtle condition: no point $p(x)$ in the approximation can be further than $\varepsilon (x-a)^k$ from $f(x)$. This is a stronger condition near $a$: it means, for instance, that $p(a)=f(a)$, but this need not be true at any point of a Weierstrass approximation.<|endoftext|> TITLE: Interior and accumulation points QUESTION [5 upvotes]: Show that every interior point of a set must also be an accumulation point of that set. Definitions: Any point $x$ that belongs to $E$ is said to be an interior point of $E$ provided that some interval $(x-c,\ x+c)\subset E$. Any point $x$ (not necessarily in $E$) is said to be an accumulation point of $E$ provided that for every $c>0$ the intersection $(x-c,\ x+c)\cap E$ contains infinitely many points. How do I show that every interior point of a set must also be an accumulation point of that set from these two definitions? REPLY [2 votes]: If $(x-c, x+c) \subset E$, then $(x-c,x+c) \cap E = (x-c,x+c)$, which is an interval that contains infinitely many points, for any $c > 0$. From the first definition, we know that for small enough $c_0$, we have $(x-c_0,x+c_0) \subset E$. If $c \le c_0$, then $(x-c,x+c) \cap E = (x-c,x+c)$ contains infinitely many points. Otherwise, if $c > c_0$, then $(x-c_0,x+c_0) \subset E$ contains infinitely many points, and hence $(x-c_0,x+c_0) \subset (x-c,x+c)\cap E$ implies the superset also contains infinitely many points.<|endoftext|> TITLE: Finding numbers satisfying a given condition QUESTION [5 upvotes]: How many 4-digit numbers exist such that the sum of their digits is $29$ and the number is also divisible by $29$? I literally don't know how to approach this question. What is the basic concept that would be used in solving these types of questions? Please let me know! REPLY [12 votes]: The smallest four-digit number which is divisible by $29$ is $1015$. Hence all your numbers are of the form $1015+29k$.
Now, the digit sum is $29$, which implies that the iterated digit sum is $2$; thus we only want to consider numbers which are $2\pmod 9$. The smallest number of the form $1015 +29k$ with this property is $1073$. Thus all your numbers are of the form $1073+(29\times 9)k=1073+261k$. There are only $35$ such numbers, and it is now easy to check by hand (well, tedious but doable with pencil and paper; effortless with a calculator). We see that the only examples are $$\{4988,7598,7859,9686,9947\}$$ REPLY [5 votes]: Let's write the number as $abcd$ where $a$, $b$, $c$, and $d$ are the digits. Your condition is equivalent to \begin{align} a+b+c+d &= 29 \\ 1000a+100b+10c+d &= 29k, \quad k\in \mathbb{N} \end{align} You have to consider all the possible ways $abcd$ can be arranged and check which ones satisfy the above conditions. There is possibly a clever number-theoretic algorithm to do this, but you can do a brute-force approach. The Python code below gives me the following result. $$ 4988\quad 7598\quad 7859\quad 9686\quad 9947 $$ Code:

import numpy as np

possible_number_list = np.arange(1000, 10000, 1)
for i in possible_number_list:
    if i % 29 == 0 and sum(int(digit) for digit in str(i)) == 29:
        print(i)

<|endoftext|> TITLE: A manifold for Hilbert's hotel QUESTION [29 upvotes]: Well, after recently answering a Hilbert's Hotel question, I've started to think: If an infinite number of people arrive, the solution is that every guest goes to the room with twice the number. However, if one imagines the doors one after the other (that is, in the order of the natural numbers), this means that each guest has to walk a distance that is proportional to the number of his room, which is unbounded. Assuming a constant walking speed, therefore also the move time is unbounded. That is not really a satisfying solution. But this is easily solved: Just build the hotel in an infinite-dimensional Euclidean space, where room $n$ sits at the point which has coordinates $x_k=\delta_{nk}$ where $\delta$ is the Kronecker delta. That way, the distance between any two rooms is $\sqrt{2}$, and any room-changing operation, no matter how complicated, can be done in constant time. So far, so good. However, let's assume the guests in Hilbert's Hotel are conventional 3-dimensional beings, and therefore they must live on a 3-dimensional manifold; they would die in an unconstrained higher-dimensional space, let alone an infinite-dimensional one. Therefore my question: Does there exist a 3-dimensional Riemannian manifold which has a countably infinite number of points such that any two of them have the same finite distance on the manifold? REPLY [4 votes]: For posterity here's an implementation of Micah's suggestion. Let $(\rho, \theta, \phi)$ denote spherical coordinates on the open unit ball, $\Omega = d\phi^{2} + \sin^{2}\phi\, d\theta^{2}$ the round metric on the unit sphere, and $f(\rho) = \rho/(1 - \rho)$ (or any smooth, monotone function defined for $0 \leq \rho < 1$ with $f(0) = 0$ and, as $\rho \to 1^{-}$, $f \to \infty$ rapidly enough that $f$ is not improperly integrable). In the metric $$ g = d\rho^{2} + f(\rho)^{2}\, \Omega, $$ the distance from the origin to the boundary of the ball is unity, but the volume element is $$ dV = f(\rho)^{2} \sin\phi\, d\rho\, d\theta\, d\phi. $$ Geometrically, the intrinsic radii of spherical shells centered at the origin grow rapidly enough that the volume is infinite.
In this universe, there exist countably many "cells" of fixed volume (though not suitable as three-dimensional hotel rooms, as asymptotically they necessarily become "intrinsically thin in the radial direction"), but any two rooms (or points) are separated by a distance of at most $2$ because the origin is at most one unit away from each room.<|endoftext|> TITLE: axiomatic definition of trigonometric functions QUESTION [9 upvotes]: A friend told me that in addition to the axioms for the real numbers, it can be proved (without appeal to sine and cosine) that functions $C$ and $S$ exist satisfying the following conditions: $C(a-b)=C(a)C(b)+S(a)S(b)$; $ S(x) \geq 0 ,\forall x \in [0,p]$; $ S(p)=1$. This would allow an alternative definition of sine, cosine and even $\pi$, without using geometry, calculus or non-elementary arguments. See Timothy Gowers's blog post for a discussion of how difficult it can be to define sine. Now, using the conditions as 'axioms', I managed to show that: $C(x)$ and $S(x)$ are both periodic with period $4p$; $C^2(x)+S^2(x)=1$; $C(x+p)=-S(x)$; $S(x+p)=C(x)$. And I found that if I defined $ \alpha_n= S(\frac{p}{2^n})$ and $\epsilon := \frac{p}{2^n}$, then I could show that $ S(x)$ could be defined as a function on the countably many points $B = \{n\epsilon+kp : k \in \mathbb{Z},\ n \in \mathbb{N}\} \subset \mathbb{R}$, and simultaneously show that $\alpha_n$ was strictly decreasing. However, after this point I got stuck. I didn't manage to show the existence and uniqueness of $ S(x), \forall x \in \mathbb{R}_+\setminus B$. Can this be done without using geometry? Note: The fact that $S$ is a function is something to be proven. Writing $S(x)$ assumes functionness. So we should really be careful that we don't give circular arguments. REPLY [2 votes]: First I show that $S$ and $C$ are continuous. You can easily show that the following hold: $C(x) = S(p-x)$, $S(x\pm y) = S(x)C(y) \pm C(x)S(y)$, $S(-x) = -S(x)$, $C(x) \ge 0$ if $x \in [-p,p]$, $S(p/2) = C(p/2) = 2^{-1/2}$. It follows that, when $x \in [-p,p]$ and $y \in [0,p]$, $$S(x+y)-S(x-y) = 2C(x)S(y) \ge 0,$$ so that $S$ is increasing on $[-p,p]$. Also, if $x \in [0,p]$, then $$S(x) = 2S(x/2)C(x/2) = 2S(x/2)S(p-x/2) \le 2S(x/2)S(p/2) = 2^{1/2}S(x/2),$$ so that by induction we get for nonnegative integer $n$ $$S(2^{-n}p) \le 2^{-n/2}.$$ Now we may show $S$ is continuous at $0$: Given any $\epsilon > 0$, choose $n$ large enough so that $2^{-n/2} < \epsilon$, and let $\delta = 2^{-n}p$. Then if $|x| < \delta$, $$|S(x)| = |S(|x|)| \le |S(2^{-n}p)| \le 2^{-n/2} < \epsilon.$$ Now when $x \in [-p,p]$ we have $$1 - S(x)^2 = C(x)^2 \le C(x) \le 1$$ and the squeeze theorem applies to show that $C$ is continuous at $0$. Now $S$ is continuous everywhere, because for any $x \in \mathbb{R}$, $$\lim_{h \to 0} S(x+h) = \lim_{h\to 0} [S(x)C(h)+C(x)S(h)] = S(x).$$ Thus $C$ is also continuous everywhere (since $C(x) = S(p-x)$). Next I show that $S$ and $C$ are uniquely defined on a dense subset of $\mathbb{R}$. Note that, if $x \in [0,p]$ then $$C(x) = C(x/2)^2 - S(x/2)^2 = 2C(x/2)^2 - 1$$ which, together with $C(x/2) \ge 0$, implies $$C(x/2) = \sqrt{\frac{C(x)+1}{2}}.$$ Now suppose that $S'$ and $C'$ are another pair of functions satisfying the axioms.
Then $C'$ satisfies the same equation, so we can show by induction that for integers $n \ge 0$, $$C(2^{-n}p) = C'(2^{-n}p).$$ Then since $S(x) = \sqrt{1 - C(x)^2}$ for $x \in [0,p]$ we get $$S(2^{-n}p) = S'(2^{-n}p).$$ Therefore, by the addition formulas we can see that for all $m \in \mathbb{Z}$, $$S(2^{-n}mp) = S'(2^{-n}mp) \text{ and } C(2^{-n}mp) = C'(2^{-n}mp).$$ Now the set $\{2^{-n}mp \mid m,n \in \mathbb{Z}, n \ge 0\}$ is dense in $\mathbb{R}$, so continuity implies $S = S'$ and $C = C'$. Finally, the functions $\sin(\pi x/{2p})$ and $\cos(\pi x/{2p})$ satisfy the axioms, so $S(x) = \sin(\pi x/{2p})$ and $C(x) = \cos(\pi x/{2p})$. Note: This proves that $\sin$ and $\cos$ are continuous (which I had not assumed). Edit: I suppose I haven't proved existence (except by appealing to the existence of $\sin$ and $\cos$). But I believe this works: I already showed that $S$ and $C$ are uniquely defined on the dense set $A = \{2^{-n}mp \mid m,n \in \mathbb{Z}, n \ge 0\}$. So if we can prove that $S$ is uniformly continuous, then it would extend (uniquely) to a continuous function on all of $\mathbb{R}$. For all $\epsilon > 0$, choose $n$ large enough so that $2^{-n/2} < \epsilon/2$, and let $\delta = 2^{-n}p$. If $|h| < \delta$, then from the proof that $S$ is continuous at $0$ we have $|S(h)| < \epsilon/2$ and $|1-C(h)| \le 1-C(h)^2 = S(h)^2 \le |S(h)| < \epsilon/2$, so \begin{align} |S(x + h) - S(x)| &= |S(x)C(h) + C(x)S(h) - S(x)| \\ &\le |S(x)|\,|1-C(h)| + |C(x)|\,|S(h)| \\ &< 1 \cdot \epsilon/2 + 1 \cdot \epsilon/2 = \epsilon. \end{align} So $S$ (and therefore $C$) is uniformly continuous.<|endoftext|> TITLE: Substitution Makes the Integral Bounds Equal QUESTION [15 upvotes]: This seems like a really basic calculus question, which is a tad embarrassing since I'm a graduate student, but what does it mean when a substitution in a definite integral makes the bounds the same? For example, if we have some function of $\sin(x)$: $$\int_0^{\pi} f(\sin(x)) \,\mathrm{d}x$$ If we make the substitution $u = \sin(x)$, so that $du = \cos(x)\,\mathrm{d}x$, we find $$\int_{\sin(0)}^{\sin(\pi)} \frac{f(u)}{\cos(x)} \,\mathrm{d}u = \int_0^0 \frac{f(u)}{\sqrt{1-u^2}} \,\mathrm{d}u$$ This would imply that the integral is zero. Is this always the case? For another example (more relevant to the problem I'm actually trying to solve) consider $$\int_{-b}^{b} \frac{1}{\sqrt{x^2 + a^2}}\,\mathrm{d}x$$ Clearly this can be solved using a trigonometric substitution to get $2\operatorname{arcsinh}(b/a)$, but what if I substituted $u = \sqrt{x^2 + a^2}$? Then $$\mathrm{d}u = \frac{x\,\mathrm{d}x}{\sqrt{x^2 + a^2}} \implies \mathrm{d}x = \frac{u\,\mathrm{d}u}{x} = \frac{u\, \mathrm{d}u}{\sqrt{u^2 - a^2}},$$ so the integral becomes $$\int_{\sqrt{b^2 + a^2}}^{\sqrt{b^2 + a^2}} \frac{1}{\sqrt{u^2 - a^2}}\,\mathrm{d}u$$ This integral seems to be zero, which is not the case for the integral before the substitution. What's going on here? Does this just mean that these substitutions are not valid? REPLY [3 votes]: The first integral is NOT zero! Let $f$ be the identity function, for example. In the second integral it is wrong to say that $x=\sqrt{u^2-a^2}$ for all values of $x$. In the first integral, the same: $\cos(x)=\sqrt{1-u^2}$ is not true for all values of $x$. When the substitution is not injective, problems arise when you try to express the integrand in terms of the new variable, as can be seen from these examples.
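A quick numerical check makes the failure concrete; here is a minimal sketch (Python, with scipy assumed available; the values of $a$ and $b$ are arbitrary):

from math import asinh
from scipy.integrate import quad

a, b = 1.0, 2.0  # arbitrary test values
# numerically integrate 1/sqrt(x^2 + a^2) over [-b, b]
val, err = quad(lambda x: 1.0 / (x * x + a * a) ** 0.5, -b, b)
print(val)               # about 2.887, clearly nonzero
print(2 * asinh(b / a))  # 2*arcsinh(b/a), the correct closed form

The integral is plainly nonzero, so the substitution, as performed, cannot be valid.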
So always split the integration domain so that there is injectivity in each part.<|endoftext|> TITLE: What is $\lim\limits_{n\to\infty}\frac {1}{n^2}\sum\limits_{k=0}^{n}\ln\binom{n}{k} $? QUESTION [8 upvotes]: It was originally asked on another website, but nobody has been able to prove the numerical result. The attempts usually go by Stirling's approximation or try to use the Silverman-Toeplitz theorem. REPLY [3 votes]: Here is as exact an answer as you want, from my answer here: Prove that $\prod_{k=1}^{\infty} \big\{(1+\frac1{k})^{k+\frac1{2}}\big/e\big\} = \dfrac{e}{\sqrt{2\pi}}$ $$\prod_{k=0}^n \binom{n}{k} \sim C^{-1}\frac{e^{n(n+2)/2}}{n^{(3n+2)/6}(2\pi)^{(2n+1)/4}} \exp\big\{-\sum_{p\ge 1}\frac{B_{p+1}+B_{p+2}}{p(p+1)}\frac1{n^p}\big\}\text{ as }n \to \infty $$ where $$\begin{align} C &= \lim_{n \to \infty} \frac1{n^{1/12}} \prod_{k=1}^n \big\{k!\big/\sqrt{2\pi k}\big(\frac{k}{e}\big)^k\big\}\\ &\approx 1.04633506677...\\ \end{align} $$ and the $\{B_p\}$ are the Bernoulli numbers, defined by $$\sum_{p \ge 0} B_p\frac{x^p}{p!} = \frac{x}{e^x-1} .$$ Taking logs, $$\sum_{k=0}^n \ln\binom{n}{k} \sim -\ln C+n(n+2)/2-(3n+2)\ln n/6-(2n+1)\ln(2\pi)/4 -\sum_{p\ge 1}\frac{B_{p+1}+B_{p+2}}{p(p+1)}\frac1{n^p}\\ \text{ as }n \to \infty $$ Dividing by $n^2$, $$\dfrac1{n^2}\sum_{k=0}^n \ln\binom{n}{k} \to \dfrac12 \text{ as }n \to \infty $$<|endoftext|> TITLE: Seeking an intuitive explanation of the Mapping Class Group QUESTION [7 upvotes]: For a surface $S$ the mapping class group $MCG(S)$ of $S$ is defined as the group of isotopy classes of orientation preserving diffeomorphisms of $S$: $$MCG(S)=Diff^+(S)/Diff_0(S).$$ I understand this definition as well as all of its component pieces. What I don't understand is why this quotient is a natural thing to study. Specifically, I can see why the full diffeomorphism group $Diff(S)$ would be natural to study, and if $S$ happens to be orientable, I can see why it would be reasonable to restrict one's attention to $Diff^+(S)$. However, I don't see why the quotient is a natural or intuitive next step. Is there a good explanation why diffeomorphisms that are isotopic to the identity are 'uninteresting'? Thanks! REPLY [2 votes]: To explain why this is interesting, let's look at an analysis of a particular case, namely $S = T^2$. From algebraic topology we learn about the homology functor, from which we deduce that there is a homomorphism $$\text{Diff}^+(T^2) \to \text{Aut}(H_1(T^2;\mathbb{Z})) \approx \text{SL}(2,\mathbb{Z}) $$ Thinking about the construction of $T^2$ as the quotient $\mathbb{R}^2 / \mathbb{Z}^2$, and thinking about the fact that elements of the group $\text{SL}(2,\mathbb{Z})$ act on $\mathbb{R}^2$ preserving the orbits of the additive action of $\mathbb{Z}^2$, one quickly concludes that the above homomorphism is surjective. Once you've gone that far, nothing is more natural from an algebraic perspective than to ask: what is the kernel of that homomorphism? Because, once you figure out that the kernel is $\text{Diff}_0(T^2)$, you get an amazing theorem: $$\text{MCG}(T^2) \approx \text{Diff}^+(T^2) / \text{Diff}_0(T^2) \approx \text{SL}(2,\mathbb{Z}) $$ And once you have that theorem in front of you, what is more natural than to generalize, and to ask "What can I say about $\text{Diff}^+(S)/\text{Diff}_0(S)$ in the general case?"<|endoftext|> TITLE: How do I find a flaw in this false proof that $7n = 0$ for all natural numbers? QUESTION [43 upvotes]: This is my last homework problem and I've been looking at it for a while.
I cannot nail down what is wrong with this proof even though it's obvious it is wrong based on its conclusion. Here it is: Find the flaw in the following bogus proof by strong induction that for all $n \in \Bbb N$, $7n = 0$. Let $P(n)$ denote the statement that $7n = 0$. Base case: Show $P(0)$ holds. Since $7 \cdot 0 = 0$, $P(0)$ holds. Inductive step: Assume $7·j = 0$ for all natural numbers $j$ where $0 \le j \le k$ (induction hypothesis). Show $P(k + 1)$: $7(k + 1) = 0$. Write $k + 1 = i + j$, where $i$ and $j$ are natural numbers less than $k + 1$. Then, using the induction hypothesis, we get $7(k + 1) = 7(i + j) = 7i + 7j = 0 + 0 = 0$. So $P(k + 1)$ holds. Therefore by strong induction, $P(n)$ holds for all $n \in \Bbb N$. So the base case is true, and I would be surprised if that's where the issue is. The inductive step is likely where the flaw is. I don't see anything wrong with the strong induction declaration and hypothesis, though, and the math adds up! I feel like it's so obvious that I'm just jumping over it in my head. REPLY [45 votes]: As a general rule: For fake induction proofs, find the smallest case where the conclusion does not hold, and then do each step in detail with the corresponding numbers inserted, so that it should prove that exact case. That way you will almost always quickly find the problem. In this case, the smallest failing case is $P(1)$, as that claims $7\cdot 1=0$, which is clearly wrong. Therefore the number to look at is $k+1=1$, that is, $k=0$. So let's look at the inductive step, and insert $k=0$: Inductive step: Assume $7\cdot j=0$ for all natural numbers $j$ where $0\le j\le 0$ (induction hypothesis). Show $P(k+1): 7(k+1)=0$. The only number with $0\le j\le 0$ is $j=0$, so the induction hypothesis is that $7\cdot 0=0$, which clearly is true. Write $0+1=i+j$, where $i$ and $j$ are natural numbers less than $k+1$. The only natural number less than $1$ is $0$. Therefore we have to write $0+1 = 0+0$ … oops, that's not right! Error found!<|endoftext|> TITLE: How to compute $\mathbb{E}(\exp(\int_0^t W_s ds)|W_t)$? QUESTION [14 upvotes]: I am trying to compute the conditional expectation $$\mathbb{E}\left[\exp\left(\int_0^t W_s ds\right)\middle|\, W_t\right]$$ where $W$ is a standard Wiener process and where $s\le t$. To initially simplify the problem, I have started with the calculations of $\mathbb{E}[W_s|W_t]$ and $\mathbb{E}\left[\int_0^t W_s \,ds\middle|\,W_t\right]$.
On the one hand, since $W_t$ and $W_s- \frac{s}{t}W_t$ are independent (having zero covariance and using a Gaussian vector argument), we can see that: $$\mathbb{E}\left[W_s\middle | W_t\right]=\mathbb{E}\left[W_s-\frac{s}{t}W_t\middle |\, W_t\right]+\frac{s}{t}W_t=\frac{s}{t}W_t$$ On the other hand, by independence of $W_t$ and $\int_0^t (W_s- \frac{s}{t}W_t)ds$: \begin{align}\mathbb{E}\left[\int_0^t W_s ds\middle|\,W_t\right]&=\mathbb{E}\left[\int_0^t \left(W_s-\frac{s}{t}W_t\right) ds\,\middle |\, W_t\right]+\frac{t}{2}W_t\\[0.3cm]&=\int_0^t \mathbb{E}\left[W_s-\frac{s}{t}W_t\right]ds+\frac{t}{2}W_t=\frac{t}{2}W_t\end{align} Coming back to our initial problem, we thus have: $$\mathbb{E}\left[\exp\left(\int_0^t W_s ds\right)\,\middle|\,W_t\right]=\exp\left(\frac{t}{2}W_t\right)\mathbb{E}\left[\exp\left(\int_0^t \left(W_s-\frac{s}{t}W_t\right) ds\right)\,\middle|\,W_t\right]$$ We also know that $\int_0^t \left(W_s-\frac{s}{t}W_t\right)ds$ is normally distributed with zero mean (easy to see) and variance given by: $$\mathbb{E}\left[\int_0^t\int_0^t \left(W_s- \frac{s}{t}W_t\right)\left(W_u- \frac{u}{t}W_t\right)dsdu\right]=\int_0^t\int_0^t\left(\min(s,u)-\frac{su}{t}\right)dsdu=\frac{t^3}{12}$$ By independence of $W_t$ and $\exp\left(\int_0^t \left(W_s- \frac{s}{t}W_t\right)ds\right)$, we finally obtain ($Z$ being a standard unit normal variable): $$\mathbb{E}\left[\exp\left(\int_0^t W_s ds\right)\,\middle|\,W_t\right]=\exp\left(\frac{t}{2}W_t\right)\mathbb{E}\left[\exp\left(Z\sqrt{\frac{t^3}{12}}\right)\right] =\exp\left(\frac{t}{2}W_t+\frac{t^3}{24}\right)$$ However, I am not sure whether this answer and the arguments I have used are correct. Any ideas or comments would be greatly appreciated. REPLY [2 votes]: A systematic way to do this: Compute the joint distribution of the two Gaussian variables $Y:=\int_0^T W_t dt$ and $X:=W_T$, and then evaluate the conditional distribution, which we know is obtained from the following least squares regression: $$ Y = \alpha + \beta X + Z, \label{LSQ}\tag{1}$$ where $Z$ is a zero-mean Gaussian variable independent of $X$ with variance $$\sigma^2_Z = \mathrm{Var}(Y-\beta X)=\mathrm{Var}(Y)-\beta^2\mathrm{Var}(X),\label{sigZ}\tag{2}$$ $\beta$ is the least squares slope coefficient, $$\beta = \frac{\mathrm{Cov}(X,Y)}{\mathrm{Var}(X)},\label{beta}\tag{3}$$ and $\alpha$ is the so-called intercept, chosen to make the mean of $Z$ zero, $$ \alpha = E[Y]-\beta E[X].\label{alpha}\tag{4}$$ In summary this gives the conditional distribution formula $$ Y\mid X \sim N(\alpha +\beta X,\sigma^2_Z).\label{Y|X}\tag{5}$$ Finally evaluate the conditional expectation using the moment generating function formula for a Gaussian random variable $$E[\exp(Y)\mid X] =\exp(\alpha +\beta X + \sigma^2_Z/2).\label{MGF}\tag{6}$$ It remains to compute the individual ingredients. 1a) $Y$ is rewritten using (stochastic) integration by parts, $ Y = TW_0+\int_0^T (T-t)dW_t.$ This gives $E[Y]=TW_0$ and $\mathrm{Var}(Y)=\int_0^T (T-t)^2dt=T^3/3$. 1b) $E[X] = W_0$ and $\mathrm{Var}(X)=T$ by standard properties of BM. 1c) $\mathrm{Cov}(X,Y) = \mathrm{Cov}(\int_0^T dW_t,\int_0^T (T-t)dW_t) = \int_0^T (T-t)dt =T^2/2$. Calculate all the parameters of the regression (\ref{LSQ}) using formulae (\ref{sigZ}-\ref{alpha}), starting with (\ref{beta}): 2a) $\beta = \frac{T^2/2}{T}=\frac{T}{2}$, $\alpha = TW_0 - \frac{T}{2}W_0$. 2b) $\sigma^2_Z = \frac{T^3}{3}-\left(\frac{T}{2}\right)^2 T=\frac{T^3}{12}$. 2c) From (\ref{Y|X}), $Y\mid X \sim N\big(TW_0 + \tfrac{T}{2}(W_T-W_0),\ \tfrac{T^3}{12}\big)$.
Finally, put everything together in (\ref{MGF}): 3a) $E[\exp(Y)\mid X] =\exp\big(TW_0 + \tfrac{T}{2}(W_T-W_0) + \tfrac{T^3}{24}\big)$. (For $W_0=0$ this is $\exp\big(\tfrac{T}{2}W_T+\tfrac{T^3}{24}\big)$, matching the direct computation above.)<|endoftext|> TITLE: What kind of mathematical model is this? QUESTION [6 upvotes]: I am making a highly simplified economic model. When I started with it, I assumed it would become a model of ordinary delayed differential equations, but it turns out to be something different, which I don't know how to categorize or analyse. (In the end I'd like to know if there can be cycles or multiple equilibria in the model.) The model is as follows. There are 2 sectors: the Electricity Sector, and the Coal Sector. They have a circular relation (coal is used to produce electricity, and electricity is used to produce coal). Electricity production $e(t)$ has an immediate effect on coal production $c(t)$, but coal has to be transported to the electricity plant and has a delay $\Delta$. Coal can be stored near the electric plant, and $s(t)$ is the amount stored at $t$, but electricity cannot be stored. Coal production has a maximum capacity of $\kappa$ coal per unit $t$, at which rate it consumes $\alpha$ units of electricity per unit $t$. Coal is absorbed at rate $a(t)$, depending on coal available and electricity need, which produces $\beta$ electricity per unit coal. This means $\alpha/ \beta$ is the maximum needed coal absorption per $t$ to satisfy maximum coal production. This gives the following relations: $$c(t)=\min\left(\kappa,\frac{\kappa}{\alpha}e(t)\right)$$ $$\dot s(t)=c(t-\Delta)-a(t)$$ $$a(t)=\begin{cases}\min\left(\frac{\alpha}{\beta}, a^{\max}\right) & \text{if } s(t)>0 \\ \min\left(\frac{\alpha}{\beta},c(t-\Delta)\right) & \text{if } s(t)=0\end{cases}$$ $$e(t)=\beta \cdot a(t)$$ Now my question is as follows: The only differential equation here is the one about the change in stored coal over time, $\dot s$. Yet, the variables I am primarily interested in are coal and electricity production $c(t), e(t)$. I could simply substitute the electricity and coal functions into the $\dot s$ relation, and treat it as a delayed ordinary differential equation of one variable, but then I would only be analyzing fluctuations in stored coal, which (I think) doesn't allow me to analyse the change in electricity and coal production over time. But $c(t)$ and $e(t)$ are not in differential equation form. So how do I analyse $c(t)$ and $e(t)$ here (find out whether there are cycles or multiple equilibria, or compute numerical examples; e.g., I'd like to compute examples in Mathematica), and more generally, what category of dynamical system is this? Edit: I just realized that when I said "I could simply substitute the electricity and coal functions into the $\dot s$ relation", I was wrong, because it actually seems to be impossible to substitute $c$ and $e$ into $\dot s$ without getting into an infinite regress, where you have to refer to $c(t-\Delta), c(t-2\Delta), c(t-3\Delta), ...$. This makes my question even more pertinent. REPLY [4 votes]: I think you've written down a very interesting system, from the mathematical point of view! I would eliminate $e$ and write the system as \begin{align} c(t) =& \kappa\min (1, \tfrac{\beta}{\alpha} a(t)), \tag{1a}\\ a(t) =& \left\{ \begin{array}{rcl} \min(\frac{\alpha}{\beta}, a^\text{max}) & \text{if} & s(t) > 0,\\ \min(\frac{\alpha}{\beta}, c(t-\Delta)) & \text{if} & s(t) = 0 \end{array}\right. ,\tag{1b}\\ \frac{\text{d} s}{\text{d} t} =& c(t-\Delta) - a(t).\tag{1c} \end{align} I would call this a delay-differential-algebraic piecewise smooth system, if that helps at all.
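Since you explicitly mention wanting to compute numerical examples, here is a rough time-stepping sketch of system (1a)-(1c), in Python rather than Mathematica; the parameter values and initial data below are made up purely for illustration:

import numpy as np

kappa, alpha, beta = 1.0, 1.0, 2.0  # made-up capacity/conversion constants
a_max, delay = 0.4, 1.0             # made-up absorption cap and transport delay
dt, T = 0.01, 50.0
n = int(T / dt)
m = int(delay / dt)                 # the delay measured in time steps

c = np.zeros(n)  # coal production c(t)
a = np.zeros(n)  # absorption a(t)
s = np.zeros(n)  # stored coal s(t)
s[0] = 0.5       # arbitrary initial stock; history c(t) = 0 for t < 0

for k in range(n - 1):
    c_del = c[k - m] if k >= m else 0.0             # c(t - Delta)
    if s[k] > 0:                                    # (1b)
        a[k] = min(alpha / beta, a_max)
    else:
        a[k] = min(alpha / beta, c_del)
    c[k] = kappa * min(1.0, (beta / alpha) * a[k])  # (1a)
    s[k + 1] = max(0.0, s[k] + dt * (c_del - a[k])) # (1c), stock clipped at 0

Plotting $c$, $a$ and $s$ over time for various parameter choices is a cheap way to hunt for the cycles you ask about before attempting any formal analysis.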
As you've found out, you cannot reduce it to a delay-differential equation for $s$, because you would have to have knowledge of $c$ at every previous time $t-n \Delta$. The analysis of the existence of equilibria is the easiest, so let's start with that. Assume there exist $(c_*,a_*,s_*)$ that solve our system and are time-independent. The ODE for $s$, equation (1c), then implies that \begin{equation} c_* = a_*. \end{equation} From equation (1a) it follows that either $c_* = \kappa$ or $\kappa \frac{\beta}{\alpha} = 1$, which is a condition on the parameters. Assuming the most general situation, i.e. where the system parameters are chosen freely, we focus on the case $c_* = \kappa = a_*$. Note that this implies that $\frac{\beta}{\alpha} a_* = \frac{\beta}{\alpha} \kappa > 1$. We can now use equation (1b) to see if multiple choices for $s_*$ are possible. Suppose $s_* = 0$. Then, (1b) is satisfied if $c_* = \kappa > \frac{\alpha}{\beta}$ or $\kappa = \frac{\alpha}{\beta}$, which is the same parameter condition we encountered in the analysis of equation (1a). So, without assuming any such relation on parameters, we obtain Result 1: If the model parameters satisfy the inequality $\kappa > \frac{\alpha}{\beta}$, then $(c,a,s) = (\kappa,\kappa,0)$ is an equilibrium of the system. Now suppose $s_* > 0$. Then, (1b) is satisfied if $\kappa = \frac{\alpha}{\beta}$ or $\kappa = a^\text{max}$. Both are conditions on parameters, which are not satisfied in general if you're allowed to choose your system parameters freely. Therefore, we can state Result 2: If and only if $a^\text{max} = \kappa$, then $(c,a,s) = (\kappa,\kappa,s_*)$ is an equilibrium of the system, for any choice of $s_*>0$. What about this special case $\kappa = \frac{\alpha}{\beta}$? Let's look at equation (1a) first. Assume that $\frac{\beta}{\alpha} a_* > 1$, then $c_* = \kappa$. But that implies that $a_* = \kappa = \frac{\alpha}{\beta}$, which violates our assumption. Assuming that $\frac{\beta}{\alpha} a_* < 1$ leads to a contradiction in the same way, and therefore we again get $c_* = \kappa = a_*$. Equation (1b) is again satisfied if we choose $s_* = 0$. However, we can also satisfy equation (1b) for $s_* > 0$ if we choose $a^\text{max} < \frac{\alpha}{\beta} = \kappa$. This leads to Result 3: Suppose that $\kappa = \frac{\alpha}{\beta}$. If $a^\text{max} < \kappa$, then $(c,a,s) = (\kappa,\kappa,s_*)$ is an equilibrium of the system, for any choice of $s_*>0$. To conclude: For a region in parameter space (where $\kappa > \frac{\alpha}{\beta}$) there exists a unique equilibrium solution for the system. I hope that's helpful in some way. By the way, you can reduce system (1) to an integro-delay equation for $a$. Writing \begin{align} s(t) =& s(0) + \int_0^t c(\tau-\Delta)-a(\tau)\,\text{d}\tau\\ =& s(0) + \int_0^t \kappa \,\min(1,\tfrac{\beta}{\alpha}a(\tau-\Delta))-a(\tau)\,\text{d}\tau, \end{align} you end up with the equation \begin{equation} a(t) = \left\{ \begin{array}{rcl} \min(\frac{\alpha}{\beta}, a^\text{max}) & \text{if} & s(0) + \int_0^t \kappa \,\min(1,\tfrac{\beta}{\alpha}a(\tau-\Delta))-a(\tau)\,\text{d}\tau > 0,\\ \min(\frac{\alpha}{\beta}, \kappa \,\min(1,\tfrac{\beta}{\alpha}a(t-\Delta))) & \text{if} & s(0) + \int_0^t \kappa \,\min(1,\tfrac{\beta}{\alpha}a(\tau-\Delta))-a(\tau)\,\text{d}\tau = 0 \end{array}\right. . \end{equation}<|endoftext|> TITLE: Simple way to calculate $n! \pmod p$ QUESTION [5 upvotes]: I have the exercise "Calculate $10! \pmod{13}$".
I have the following two approaches to solve the exercise: Brute force approach $$ 1! \equiv 1 \pmod{13} \\ 2! = 2 \cdot 1! \equiv 2 \cdot 1 \equiv 2 \pmod{13} \\ 3! = 3 \cdot 2! \equiv 3 \cdot 2 \equiv 6 \pmod{13} \\ \cdots \\ 10! = 10 \cdot 9! \equiv 10 \cdot 11 = 110 = 8 \cdot 13 + 6 \equiv 6 \pmod{13} $$ Approach using Wilson's theorem: Wilson's theorem states that $$p \in \mathbb{P} \implies (p-1)! \equiv -1 \pmod p$$ For my exercise: $$13 \in \mathbb{P} \implies \\ (13-1)! = 12! = 10!\cdot 11 \cdot 12 \equiv -1 \pmod{13} \implies \\ 10! \equiv -(11 \cdot 12)^{-1} \pmod{13} $$ Using Fermat's little theorem (for $p \nmid a$): $$ a^p \equiv a \pmod p \implies a^{p-1} \equiv 1 \pmod p \implies a^{p-2} \equiv a^{-1} \pmod p \\ $$ For my exercise: $$10! \equiv -(11 \cdot 12)^{-1} \equiv \\ -(11 \cdot 12)^{13-2} = -(11 \cdot 12)^{11} \equiv \\ -(-2 \cdot -1)^{11} = -2^{11} = \\ -2^6 \cdot 2^5 \equiv 1 \cdot 2^5 = \\ 32 \equiv 6 \pmod{13} \\ $$ Both approaches look quite bulky. In the first method I have to make $O(n)$ multiplications. In the second method I have to make $O(p-n)$ multiplications, which is smaller than in the first method, but can still be a huge number for big $p$ and $n$. Is there a way to improve my solution? Is there an efficient way to calculate $n! \pmod p$ for big $n$ and $p$? REPLY [6 votes]: Start with your Wilson's Theorem approach but finish off differently. Note that $12\equiv -1$ and $11\equiv -2 \pmod{13}$ and that these two numbers have easy inverses $(-2)(-7) \equiv 1$ and $(-1)(-1) \equiv 1 \pmod{13}$, so $$10! \equiv -(11)^{-1}(12)^{-1} \equiv -(-2)^{-1}(-1)^{-1} \equiv -(-7)(-1) \equiv 6 \pmod{13}.$$<|endoftext|> TITLE: Find all functions $f(x)$ such that $f\left(x^2+f(y)\right)$=$y+(f(x))^2$ QUESTION [6 upvotes]: Let $\mathbb R$ denote the set of all real numbers. Find all functions $f: \mathbb{R}\to \mathbb{R}$ such that $$f\left(x^2+f(y)\right)=y+(f(x))^2$$ That is the problem. I tried it by substituting many values for $x$ and $y$, but I can't proceed. Please somebody help me. REPLY [2 votes]: Here is a short approach without the Cauchy equation. Starting from $f(0)=0$ by @Leo163, we obtain $f(f(x))=x$ by plugging $y=0$. This shows $f$ is a bijection. Now plugging $y=f^{-1}(\zeta)$, we see $$f(x^2+\zeta)=f^{-1}(\zeta)+(f(x))^2\geq f^{-1}(\zeta)=f(\zeta)$$ Hence $f$ is monotone increasing. Now suppose for some $x_0$ we have $f(x_0)> x_0$, then $x_0=f(f(x_0))\geq f(x_0)> x_0$, a contradiction. Thus we have $f(x)\leq x$. Similarly we have $f(x)\geq x$, so $f(x)=x$.<|endoftext|> TITLE: How to check if you are counting right? QUESTION [24 upvotes]: Combinatorics is "hard" even at the elementary level in that verifying answers becomes extremely tricky. In algebra, when a solution to an equation such as $$x^2+4x+3=0$$ is desired, the solution can be obtained and then checked by plugging the answer back into the equation. Consider, for instance, this problem. Find the number of ways to create $5$ groups of exactly two among $10$ people such that no person belongs to two groups. An incorrect solution: First select a group of two people from the $10$ people. There are ${10 \choose 2}$ ways of doing this. The next group can be selected in ${8 \choose 2}$ ways and so on. So by the multiplication principle, the total number of ways is equal to the value of the following product.
$${10 \choose 2} \times {8 \choose 2} \times {6 \choose 2} \times {4 \choose 2}$$ The idea in the next (incorrect) solution is to create a bijection between the number of permutations of the people standing in a line and the number of groups that can be formed. Though this is the same idea that one possible correct solution uses, the groups are overcounted. Incorrect solution-2: There are $10!$ ways in which the $10$ people can stand in a line. The first and the second person, the third and the fourth and so on are placed in a single group. Since switching the first and second position will give another permutation but the same groups, we need to divide by $2$. Similarly switching the third and the fourth position will give a different permutation but the same groups. Therefore ultimately, the total number of ways to form the $5$ groups will be $$\frac{10!}{2^{5}}$$ The solution which yields the correct answer is the following. Correct Solution-2: There are $10!$ ways in which the $10$ people can stand in a line. The first and the second person, the third and the fourth and so on are placed in a single group. Since switching the first and second position will give another permutation but the same groups, we need to divide by $2$. Similarly switching the third and the fourth position will give a different permutation but the same groups. Also consider the following fact. If the people (indicated by letters) are arranged in one permutation as follows, ABCDEFGHIJ, then the permutation CDABIJGHEF also corresponds to the same set of groups, which is why there is a need to divide by $5!$, as there are $5!$ ways to reorder the five groups among themselves. Therefore ultimately, the total number of ways to form the $5$ groups will be $$\frac{10!}{2^{5}\times5!}$$ Questions concerning the correctness of a counting procedure adopted are quite common on this forum. (I will add some links if you think it is necessary.) This is one reason this question, albeit subjective, has been asked. Also, I have not come across any text which addresses how not to undercount or overcount. Any discussion of techniques on how to avoid undercounting/overcounting, or how to check the enumeration, will be highly appreciated. Note: The problems that I mean when I say combinatorics problems are enumerative combinatorics problems. This will perhaps narrow the scope of discussion by a considerable extent. Though counting in two ways is sometimes a nice way to check whether the solution obtained is right, it takes a lot of ingenuity to enumerate the answer to every problem in two ways. Which is why any answer to this question can avoid that technique and also not include listing out all the possibilities. REPLY [3 votes]: This reminds me of this question I answered recently. Yes, validating your answer is not easy when we have combinatorics or probability questions. There is no generic method one can "mindlessly" follow. Monte Carlo simulation or brute force computing can help, provided you have understood the question correctly and your model/program is correct. Keep in mind that some questions are easier to validate with this method because they have a straightforward simulation/computation model, while others are more difficult because the model is not apparent or it is difficult to program (and then you worry about validating your program :)). For example, think about this question: Three people roll a pair of dice each. What's the probability that one person has rolled at least one $1$, while the other two don't have a die with $3$ or below?
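For concreteness, here is a minimal Monte Carlo sketch in Python for that dice question (the event is read as: exactly one of the three people has rolled at least one $1$, while each of the other two has both dice above $3$; that reading of the wording is my own):

import random

trials, hits = 100_000, 0
for _ in range(trials):
    rolls = [(random.randint(1, 6), random.randint(1, 6)) for _ in range(3)]
    has_one = [1 in pair for pair in rolls]        # rolled at least one 1
    both_high = [min(pair) > 3 for pair in rolls]  # no die with 3 or below
    if any(h and all(both_high[j] for j in range(3) if j != i)
           for i, h in enumerate(has_one)):
        hits += 1
print(hits / trials)  # empirical estimate of the probability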
As the sketch illustrates, this is relatively straightforward to model and simulate: you simulate many rolls of three pairs of dice, you check whether these conditions hold in each try, and you count the times they held and divide by the total number of tries. On the other hand, how would you simulate/compute the problem at hand (counting the pairings)? I can think of the following brute-force computation approach: list all permutations of 10 people, each time grouping them into pairs (1st and 2nd is one pair, 3rd and 4th is another, etc). If a permutation results in a configuration of pairings we have already encountered, then we do not count it. It is certainly doable, but there is plenty of room for programming mistakes when trying to find if two pair groupings are the same (see the sketch at the end of this answer). Moreover, if we had 100 people instead of 10, computing time would explode using this method. The takeaway message is that even though simulation or brute-force computation can help in some cases, they are not a cure-for-all. Trying your solution for small values can also help. But be careful. Sometimes wrong formulas give the right answer for small numbers. For our specific question, this is not the case (the wrong formula produces wrong results even for small values), so we could have actually gotten some help if we tried this method. Let's say we have $4$ people instead of $10$. The first (i.e., incorrect) method would yield ${4 \choose 2} = 6$, but when we try to enumerate all possible pairings manually, we get only $3$ ways. Looking into this we can get an insight into how we are double-counting. But how do we know that there are only $3$ ways to pair $4$ items? What if we made a mistake in our manual counting? One way to feel more confident about our result is to notice that once we choose an item and we create all possible pairs with the 3 remaining items, then the last pair is 'forced' to be formed. This brings me to the crux of the validation issue. The most helpful thing is to think of different ways of solving the problem. In the question you link, this is what the OP has done. They have realised that they can view the problem like this: Choose one person and then you have $9$ possibilities to pair them with someone. This is our first pair. Then choose another person (does not matter who) and they have $7$ possibilities to be paired. And so on with the rest. So the possible pairings should be: $9\cdot 7\cdot 5\cdot 3 = 945$.
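And here is the promised brute-force check of that count, a minimal Python sketch along the lines of the enumeration described above (slow, as it walks through all $10!$ permutations, but fine as a one-off check):

from itertools import permutations

pairings = set()
for perm in permutations(range(10)):
    # adjacent positions form the 5 groups; frozensets make the
    # comparison insensitive to order within and between pairs
    pairings.add(frozenset(frozenset(perm[i:i + 2]) for i in range(0, 10, 2)))
print(len(pairings))  # 945, matching 9*7*5*3

The frozenset-of-frozensets trick is exactly the "find out if two pair groupings are the same" step that was flagged as error-prone; hashing a normalized representation sidesteps it.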
A great side effect of this process is that you gradually gain more insight in the mistakes you and other people do, and you become more confident about your solutions in future problems.<|endoftext|> TITLE: Newton vs Leibniz notation QUESTION [16 upvotes]: I have often come across the cursory remarks made here and there in calculus lectures , math documentaries or in calculus textbooks that Leibniz's notation for calculus is better off than that of Newton's and is thus more widely used. Though I have always followed Leibniz's notation( matter of familiarity, as that's what I have been taught) , but of late I had the idea of following Newton's notation just to see where I could get stuck just because of "notational" issues. Is there any limitation of Newton's notation that I might encounter while doing calculus ; and which may make it seem a bad idea to do calculus in Newton's notation? Here "Leibniz notation" is $\frac{dy}{dx}$ for the derivative of $y$, and "Newton's notation" is $\dot{y}$ for the derivative of $y$. REPLY [5 votes]: Wikipedia has a dedicated page on notations for differentiation, in short: Leibniz $\frac{dx}{dt}$ Newton $\dot{x}$ Lagrange $x'(t)$ Leibniz's notation is suggestive, thanks to the cancelling of the differentials in the chain rule: $$ \frac{dy}{dt}=\frac{dy}{dx}\frac{dx}{dt} $$ however great care must be taken, as this notation can also be misleading for higher order derivatives: $$ \frac{d^2y}{dt^2}=\frac{d^2y}{dx^2}\frac{dx^2}{dt^2}=\frac{d^2y}{dx^2}\left(\frac{dx}{dt}\right)^2 $$ which is wrong, the right formula is: $$ \frac{d^2y}{dt^2}=\frac{d^2y}{dx^2}\left(\frac{dx}{dt}\right)^2+\frac{dy}{dx}\frac{d^2x}{dt^2} $$ You have not this problem with Lagrange's notation: $$ y(x(t))''=(y'(x(t))x'(t))'=y''(x(t))(x'(t))^2+y'(x(t))x''(t) $$ These notation problems are well known when teaching differential calculus, see: H. Poincaré, La Notation Différentielle et l'enseignement (pdf) J. Hadamard, La notion de différentielle dans l'enseignement (pdf) unfortunately both in French, however you can find an English translation of Hadamard's article here. You can also see: Differentials, higher-order differentials and the derivative in the Leibnizian calculus (pdf)<|endoftext|> TITLE: Integral Equation: $\frac{1}{\lambda(y)} = c_1 \int_0^\infty \lambda(x) \exp(-c_2 y x) \, dx$ QUESTION [5 upvotes]: As presented in the question's title, I wish to find a function $\lambda(\cdot): [0, \infty) \to [0, \infty)$ which satisfies the integral equation: \begin{equation} \frac{1}{\lambda(y)} = c_1 \int_0^\infty \lambda(x) \exp(-c_2 y x) \, dx \end{equation} where $c_1$ and $c_2$ are positive constants. Unfortunately, I have no clear idea about how to systematically tackle this question. Any help is greatly appreciated! REPLY [3 votes]: Let me show that no such function exists. We argue this by contradiction. Assume that there is a function $\lambda : [0, \infty) \to [0, \infty)$ satisfying $$ \frac{1}{\lambda(s)} = c_1 \int_{0}^{\infty} \lambda(x) e^{-c_2 sx} \, dx \quad \forall s \geq 0 \tag{*}$$ with the convention that $1/0 = \infty$. Then Step 1. In this step, we normalize $\lambda$ and reveal some useful facts on it. Since the right-hand side of $\text{(*)}$ is decreasing, $\lambda$ is increasing. Since the left-hand side of $\text{(*)}$ is always positive (or possibly infinite), $\lambda$ cannot be identically zero. 
From these two properties, $\alpha$ defined by $$ \alpha := \inf \{ s > 0 : \lambda(s) > 0 \} $$ is a well-defined nonnegative real number such that $\lambda(s) = 0$ for all $s \in [0, \alpha)$. Moreover, if $\alpha > 0$ then the modified version $\tilde{\lambda}(s) = e^{-c_2 \alpha s}\lambda(s+\alpha)$ satisfies \begin{align*} \frac{1}{\tilde{\lambda}(s)} = \frac{e^{c_2 \alpha s}}{\lambda(s+\alpha)} &= c_1 e^{c_2 \alpha s} \int_{\alpha}^{\infty} \lambda(x) e^{-c_2 (s+\alpha) x} \, dx \\ &= c_1 e^{c_2 \alpha s} \int_{0}^{\infty} \lambda(x+\alpha) e^{-c_2 (s+\alpha)(x+\alpha)} \, dx \\ &= c_1 e^{-c_2 \alpha^2} \int_{0}^{\infty} \tilde{\lambda}(x) e^{-c_2 sx} \, dx. \end{align*} So by changing the value of $c_1$ if needed, we may assume that $\alpha = 0$. Then by @guestDiego's computation, we may further assume that $c_1 = c_2 = 1$, and we do so. If $\lambda(0) > 0$, then $$ \infty > \frac{1}{\lambda(0)} = \int_{0}^{\infty} \lambda(x) \, dx \geq \int_{0}^{\infty} \lambda(0) \, dx = \infty $$ and we get a contradiction. Thus $\lambda(0) = 0$. From the standard theory of the Laplace transform, if $f \geq 0$ is measurable and $$\mathcal{L}\{f\}(s) := \int_{0}^{\infty} f(x)e^{-sx} \, dx$$ is finite for all $s > 0$, then $\mathcal{L}\{f\}(s)$ converges for $\Re(s) > 0$ and defines an analytic function on the same region. Moreover, differentiation can be computed by using Leibniz's integral rule: $$ \frac{d^n}{ds^n} \mathcal{L}\{f\}(s) = (-1)^n \int_{0}^{\infty} x^n f(x) e^{-sx} \, dx. $$ Step 2. Now we are ready to establish a contradiction. First, we have $\lambda'(s) \geq 0$ because $\lambda$ is increasing. Then by Tonelli's theorem, \begin{align*} \frac{s}{\lambda(s)} &= \int_{0}^{\infty} \lambda(t) s e^{-st} \, dt \\ &= \int_{0}^{\infty} \bigg( \int_{0}^{t} \lambda'(x) \, dx \bigg) s e^{-st} \, dt \\ &= \int_{0}^{\infty} \lambda'(x) \bigg( \int_{x}^{\infty} s e^{-st} \, dt \bigg) dx \\ &= \int_{0}^{\infty} \lambda'(x) e^{-sx} \, dx. \end{align*} Taking logarithmic derivatives on both sides, we get $$ \frac{\lambda'(s)}{\lambda(s)} = \frac{\int_{0}^{\infty} x \lambda'(x) e^{-sx} \, dx}{\int_{0}^{\infty} \lambda'(x) e^{-sx} \, dx} + \frac{1}{s}. \tag{1}$$ This is our key ingredient toward a contradiction. Using this, we inductively prove that Claim. For any $n = 1, 2, 3, \cdots$ and $s > 0$, we have $\frac{\lambda'(s)}{\lambda(s)} \geq \frac{n}{s}$. The base case $n = 1$ is straightforward from $\text{(1)}$ since the ratio between the two integrals in the RHS of $\text{(1)}$ is non-negative. Next, assuming the claim for $n$, we have $$ \frac{\lambda'(s)}{\lambda(s)} \geq \frac{\int_{0}^{\infty} n \lambda(x) e^{-sx} \, dx}{\int_{0}^{\infty} \lambda'(x) e^{-sx} \, dx} + \frac{1}{s} = \frac{n/\lambda(s)}{s/\lambda(s)} + \frac{1}{s} = \frac{n+1}{s}. $$ Then the claim follows from mathematical induction. Now the contradiction is obvious: $\lambda'(s)/\lambda(s)$ is finite for $s > 0$ while $n/s$ can be arbitrarily large! Therefore no such $\lambda$ can exist.<|endoftext|> TITLE: Multiplying two logarithms QUESTION [6 upvotes]: I've searched for some answer already, but couldn't find any solution to this problem. Apparently, there's no rule for the product of two logarithms. How would I then find the exact solution of this problem? $$ \log(x) = \log(100x) \, \log(2) $$ REPLY [7 votes]: How would I then find the exact solution of this problem? Manipulate the equation to isolate $x$.
\begin{align*} \log(x) &= (\log(100)+\log(x))\log(2) \\ \log(x) &=\log(100)\log(2)+\log(x)\log(2)\\ \log(x)-\log(x)\log(2)&=\log(100)\log(2)\\ \log(x)(1-\log(2))&=\log(100)\log(2) \\ \log(x)&=\log(100)\log(2)/(1-\log(2))\\ \end{align*} Then solve for $x$ in whatever base your logarithm uses. E.g., with base 10, $$x\approx7.267$$<|endoftext|> TITLE: What is the infinite dimensional counterpart of the Lie derivative? QUESTION [6 upvotes]: In a finite dimensional space, one calculates the Lie derivative as $L_f(g)(x) = \langle \nabla g, f \rangle$. What is the equivalent in an infinite dimensional space? For example, if $g$ takes a function as its argument and $f$ is an infinite dimensional vector, how does one think about the Lie derivative? I am familiar with the Gateaux derivative, so does one simply replace the gradient with this? Then we might have $L_f(g)(x) = \langle dg, f \rangle$ for some appropriate inner product, for instance $L^2$? REPLY [4 votes]: You might find what you're looking for in one of the following links: Lang: http://www.springer.com/gp/book/9780387985930 Kriegl & Michor: http://bookstore.ams.org/surv-53 Choquet-Bruhat & DeWitt-Morette: https://www.elsevier.com/books/analysis-manifolds-and-physics-revised-edition/choquet-bruhat/978-0-444-86017-0 Omori: http://bookstore.ams.org/mmono-158/ Basically, consider looking for material on infinite dimensional manifolds.<|endoftext|> TITLE: Finite morphism is stable under base change. QUESTION [7 upvotes]: In scheme theory, there is the notion of a 'finite morphism'. I found the fact that finite morphisms are stable under base change on Wikipedia. (Link: https://en.wikipedia.org/wiki/Finite_morphism) But I cannot prove it or find a proof. How can I prove it? REPLY [7 votes]: This is [EGAII, Prop. 6.1.5(iii)], although I'm sure you can find a better reference. Proposition. If $f\colon X \to S$ is a finite morphism, then the morphism $f' \colon X \times_S S' \to S'$ is finite for all base extensions $g\colon S' \to S$. Proof. Cover $S$ with open affines $V_i = \operatorname{Spec} B_i$. By the definition of a finite morphism, we have $f^{-1}(V_i) = \operatorname{Spec} A_i$, where each $A_i$ is finitely generated as a module over $B_i$. The sets $g^{-1}(V_i)$ form an open cover for $S'$, and we can cover these $g^{-1}(V_i)$ with open affines $V'_{ij} = \operatorname{Spec} B'_{ij}$. We claim that the preimages $$f^{\prime-1}(V'_{ij}) = X \times_S V'_{ij} \cong f^{-1}(V_i) \times_{V_i} V'_{ij}$$ of the $V'_{ij}$ are affine, with coordinate rings $A'_{ij}$ that are finitely generated as modules over $B'_{ij}$. Note that the isomorphism above is by the construction of the fiber product; see [Hartshorne, Thm. 3.3, Step 7]. First, affinity of $f^{\prime-1}(V'_{ij})$ follows since $$f^{\prime-1}(V'_{ij}) \cong f^{-1}(V_i) \times_{V_i} V'_{ij} \cong \operatorname{Spec}(A_i \otimes_{B_i} B'_{ij})$$ Now defining $A'_{ij} := A_i \otimes_{B_i} B'_{ij}$, we want to show that $A'_{ij}$ is finitely generated as a module over $B'_{ij}$. Since $A_i$ is finitely generated as a module over $B_i$, we have surjections $$B_i^{\oplus n_i} \twoheadrightarrow A_i$$ for each $i$. Now applying $-\otimes_{B_i} B'_{ij}$, by the right-exactness of the tensor product, we have surjections $$B_{ij}^{\prime\oplus n_i} \twoheadrightarrow A_i \otimes_{B_i} B'_{ij} =: A'_{ij}$$ for each $i,j$. Thus, $A'_{ij}$ is a $B'_{ij}$-algebra, which is finitely generated as a module over $B'_{ij}$.
$\blacksquare$<|endoftext|> TITLE: $f$-related vector field QUESTION [5 upvotes]: Let $M$ and $N$ be smooth manifolds and $f:M\to N$ a smooth submersion. Prove that for every vector field $X\in \mathfrak{X}(N)$ there is a vector field $Y\in \mathfrak{X}(M)$ such that $X$ and $Y$ are $f$-related (i.e., $f_{*}Y=X\circ f $). Here is where I'm at: take $p\in M$, then $X(f(p))\in T_{f(p)}N$. Since $f$ is a submersion, $\exists v\in T_pM$ such that $f_{*_{p}}(v)=X(f(p))$. In that fashion, we can define a function $Y:M\to TM$ with $Y(p)=v$. I suppose this is the natural way to start. The problem is that this $v$ is not uniquely defined, which means $Y$ may well not be smooth depending on the choices for $v$, so I'm stuck here. Any suggestions? Thanks! REPLY [6 votes]: Let $p \in M$. Due to the normal form of smooth submersions, there are smooth charts $(U_p, y_p)$ of $M$ and $(V_{f(p)}, x_{f(p)})$ of $N$, containing $p$ and $f(p)$, respectively, such that: $$ x_{f(p)}\circ f \circ (y_p)^{-1}(z_1, \ldots, z_m) = (z_1, \ldots, z_n)$$ Then, it isn't hard to prove that for $i \in \left \{ 1, \ldots, n\right \}$ $$\mathrm{d}f_{p}\left ( \dfrac{\partial }{\partial y^i}\bigg|_{p} \right ) = \dfrac{\partial }{\partial x^i}\bigg|_{f(p)}$$ I'm not going to write down the subindex $p$ on the components of the charts, so as not to overload the notation. On the other hand, since $X$ is a vector field, it has a representation on $V_{f(p)}$ of the form $$ X = \sum\limits_{i=1}^{n}g_i^p \dfrac{\partial }{\partial x^i}$$ Afterwards, just define the vector field $Y^p:U_p \to TM$ by $$Y^p = \sum\limits_{i=1}^{n}\left ( g_i^p \circ f\right ) \dfrac{\partial }{\partial y^i}$$ It is straightforward that $Y^p$ is a smooth vector field and $$\mathrm{d}f_q\left ( Y^p(q)\right ) = X(f(q)) \quad \forall q \in U_p$$ Therefore, we have constructed a family of local vector fields that satisfies the required condition. Now, consider a partition of unity $\left \{ \xi_p\right \}_{p \in M}$ subordinate to $\left \{ U_p\right\}_{p \in M}$ and define $$ Y = \sum\limits_{p \in M}\xi_p Y^p $$ Finally, $Y$ is a global smooth vector field that is $f$-related to $X$.<|endoftext|> TITLE: Can we use mathematical induction when induction basis is 'too' broad? QUESTION [6 upvotes]: Imagine I wanted to use the induction principle to prove that a certain proposition was true for diagonal matrices. The induction is done in the dimension of the matrix. So, the induction basis is for $n=1$. However, at this basis, there is no difference between a diagonal matrix and a non-diagonal matrix. So, how can we be sure that the reason it's right is due to being a diagonal matrix, and not because it's a non-diagonal matrix? Shouldn't the basis be $n=2$? Any help would be appreciated. REPLY [10 votes]: The induction principle you are trying to use is: Suppose that $P(1)$ is true and that if $P(n)$ is true then $P(n+1)$ is true. Then $P(n)$ is true for all $n$. In this case, $P(n)$ is the statement For all diagonal matrices of dimension $n$, [...something...] You are confused about showing that $P(1)$ is true, because every matrix of dimension $1$ is diagonal. The good news is that you needn't be worried about this: if you show that the statement is true for all $1\times 1$ matrices, then you've proved $P(1)$. You worry that: So, how can we be sure that the reason it's right is due to being a diagonal matrix, and not because it's a non-diagonal matrix? But you needn't worry about this.
The principle of induction says nothing about the reason that something is true. In your case, it turns out that all $1\times 1$ matrices have your property, but once you switch to $2\times 2$ matrices, only the diagonal matrices have that property. But you've proved that if all $n\times n$ diagonal matrices have the property, then all $(n+1)\times(n+1)$ diagonal matrices have that property. So you can get from case $1$ to case $2$ as follows: All $1\times1$ matrices have the property $\Rightarrow$ All diagonal $1\times 1$ matrices have the property (as you know, this is actually equivalent) $\Rightarrow$ All diagonal $2\times2$ matrices have the property (using the induction rule) $\Rightarrow$ All diagonal $3\times3$ matrices have the property (using the induction rule again) $\Rightarrow$ and so on<|endoftext|> TITLE: $n$-th root of $3 \times 3$ invertible matrix QUESTION [5 upvotes]: Yo, I couldn't solve this exercise after thinking for a while. For every $A \in GL_{3} (\mathbb{C})$ and $n$, there's a $B \in Mat_{3, 3}(\mathbb{C})$ such that $B^n = A$. The previous exercise was that for every nilpotent $N \in Mat_{3, 3} (\mathbb{C})$ and every $n$, $C = 1 + \frac{1}{n}N + \frac{1-n}{2n^2}N^2$ satisfies $C^n = 1 + N$, so I suppose there's a trick using this result. I tried to play a little with the splitting of $A$ as a nilpotent plus a semisimple part; however, I couldn't get anything useful. Thanks in advance. REPLY [2 votes]: If you write $A = D + N$ as semisimple + nilpotent (where $D$ and $N$ commute), then $D$ is invertible and $$A = D(I + D^{-1}N),$$ where $D^{-1}N$ is nilpotent (because $D$ and $N$ commute). Now $D$ has an $n$th root (because we are in $\mathbb{C}$, so it's diagonalizable), and so does $I + D^{-1}N$ by the previous exercise. The product of these two $n$th roots is your desired $B$ (because they commute). More generally, for matrices of any size, you can put $B = \exp(\tfrac1n \log A)$. Here, $\exp$ is defined by the usual power series, and $\log A$ is any matrix such that $\exp(\log A) = A$. If $A$ is invertible then this exists. Indeed, with $A = D(I + D^{-1}N)$ as above, then $\log D$ exists (clearly we can take the logarithm of any invertible diagonal matrix), and for the other factor we can use the power series $$\log (I+X) = \sum_{k=1}^\infty \frac{(-1)^{k-1}}{k} X^k$$ for $X = D^{-1}N$. The power series converges whenever the spectral radius of $X$ is $< 1$; in particular, it converges (the series terminates after finitely many terms) when $X$ is nilpotent. Then $$\log A = \log D + \sum_{k=1}^\infty \frac{(-1)^{k-1}}{k} (D^{-1}N)^k.$$ It's worth showing that the $C$ from your previous exercise is just $C = \exp(\tfrac1n \log(I+N))$. Actually, it's even easier to derive $C$ using the binomial series $$ (I+X)^\alpha = \sum_{k=0}^\infty \binom{\alpha}{k} X^k. $$<|endoftext|> TITLE: Probability that a number is divisible by 11 QUESTION [22 upvotes]: The digits $1, 2, \cdots, 9$ are written in random order to form a nine-digit number. Then, the probability that the number is divisible by $11$ is $\ldots$ I know the condition for divisibility by $11$, but I couldn't guess how to apply it here. Please help me in this regard. Thanks. REPLY [3 votes]: The rule of divisibility by $11$ is as follows: The difference between the sum of the digits at odd places and the sum of the digits at even places should be $0$ or a multiple of $11$. We also know that the sum of all the digits will be $45$, as $1 + 2 + ... + 9 = 45$.
Let $x$ denote the sum of digits at even positions and $y$ denote the sum of digits at odd positions, or vice versa. Case 1 (difference is $0$): $$x + y = 45$$ $$x - y = 0$$ Thus, $2x = 45$, or $x = 22.5$, which cannot be obtained. Case 2 (difference is $11$): $$x + y = 45$$ $$x - y = 11$$ Thus, $2x = 56$, or $x = 28$ and $y = 17$. This is a valid possibility. Case 3 (difference is $22$): $$x + y = 45$$ $$x - y = 22$$ Thus, $2x = 67$, or $x = 33.5$, which cannot be obtained. As you can see, the difference between the sum of the digits at odd places and the sum of the digits at even places must be $11$. Now, imagine that there are $9$ placeholders (representing the $9$ digits of the $9$-digit number). Either the sum of the digits at odd places ($5$ odd places) should be $28$, or the sum of the digits at even places ($4$ even places) should be $28$. We write down the possibilities: $2$ ways to express $28$ as a sum of $4$ distinct numbers between $1$ and $9$. $9$ ways to express $28$ as a sum of $5$ distinct numbers between $1$ and $9$. In the first case, there are $4!$ ways of arranging the $4$ numbers (that add up to $28$) and $5!$ ways of arranging the $5$ other numbers (that add up to $17$). Hence, no. of ways $= 2 * 4! * 5!$ In the second case, there are $5!$ ways of arranging the $5$ numbers (that add up to $28$) and $4!$ ways of arranging the $4$ other numbers (that add up to $17$). Hence, no. of ways $= 9 * 5! * 4!$ Total favourable possibilities $$= 2 * 4! * 5! + 9 * 5! * 4!$$ $$= 4! * 5! * (2 + 9)$$ $$= 4! * 5! * 11$$ Also, total no. of ways of arranging $9$ numbers to form a $9$-digit number = $9!$ Hence, probability $=P= (4! * 5! * 11)/9!$ $$= 11/126$$<|endoftext|> TITLE: Covariance zero for two gaussian variables QUESTION [5 upvotes]: Say we have two random variables $X$ and $Y$ and both of them have a gaussian distribution. Further, we know that $cov(X,Y) = 0$, where $cov(X,Y)$ is the covariance of two variables (i.e. $cov(X,Y) = E[(X-E[X])(Y-E[Y])]$, where $E[X]$ is the mean (expectation) of the variable $X$). Can we say that $X$ and $Y$ are independent variables? I know that, in general, $cov(X,Y)= 0 $ does not imply that $X$ and $Y$ are independent, but what about the case when $X$ and $Y$ have a gaussian (normal) distribution? Can we take this as a theorem? REPLY [2 votes]: For jointly (per @Did) normal random variables, uncorrelated implies independent. In particular, it is easy to see that the joint density function factors, giving the product of the two marginal density functions. Also, for normal data, the sample mean $\bar X$ and sample SD $S$ are independent. (Proof via linear algebra or moment generating functions.) But $\bar X$ and $S$ are not independent except for normal data. In the left panel below $S$ is plotted against $\bar X$ for 30,000 randomly generated standard normal datasets of size $n = 5.$ As a 'naturally occurring' instance where zero correlation and dependence coexist: in the right panel the same is done for 30,000 samples of size $n = 5$ from $Beta(.5, .5).$ For these beta data $\bar X$ and $S$ are uncorrelated, but not independent.

m = 30000; n = 5
x = rnorm(m*n); NRM = matrix(x, nrow=m)
ax = rowMeans(NRM); sx = apply(NRM, 1, sd)
cor(ax, sx)   ## -0.001177232  # consistent with uncorrelated
y = rbeta(m*n, .5, .5); BTA = matrix(y, nrow=m)
ay = rowMeans(BTA); sy = apply(BTA, 1, sd)
cor(ay, sy)   ## -0.001677063  # consistent with uncorrelated<|endoftext|> TITLE: Is "square inversion" possible?
QUESTION [6 upvotes]: So, there exists in geometry circle inversion: Can I perform a similar "inversion" technique through a square? What would, for example, a square look like when inverted through another square? REPLY [6 votes]: When you ask, "Can I perform a similar 'inversion' through a square?", the answer is "yes", in the sense that you can define "inversion in a square" however you like! The issue is not whether the definition articulates some kind of Platonic, a priori aspect of inversion, it's whether the definition is useful for categorizing and investigating phenomena. The definition implicit in your figure does not seem to have a natural generalization to a square, but inversion in a circle has another characterization: If $C$ is a circle of center $O$ and radius $r > 0$, and if $P \neq O$ is a point, the image $P'$ of $P$ under inversion in $C$ is the point lying on the ray $OP$ and satisfying $|OP|\, |OP'| = r^{2}$. You might therefore proceed like so: If $O$ is a point, and if $C$ is a curve that intersects each ray through $O$ exactly once, and if $P \neq O$, then we might define $P'$, the image of $P$ under inversion in $C$, to be the point lying on the ray $OP$ and satisfying $|OP|\, |OP'| = |OC|^{2}$, with $OC$ denoting the distance from $O$ to $C$ along the ray $OP$. In polar coordinates centered at $O$, if $C$ is the polar graph $r = f(\theta)$ for some $2\pi$-periodic function $f$, then the point $P$ with polar coordinates $(R, \theta)$ maps to the point $P'$ with polar coordinates $(f(\theta)^{2}/R, \theta)$. Here, for example, is the image (blue) of a circle under inversion in a square in the preceding sense, taking $O$ to be the center of the square:<|endoftext|> TITLE: non-constant entire function $f$ such that $f(n+\dfrac{1}{n})=0$ for all $n\in \Bbb N$? QUESTION [5 upvotes]: Does there exist a non-constant entire function $f : \mathbb{C}\to\mathbb{C}$ such that $f(n+\dfrac{1}{n})=0$ for all $n\in \Bbb N$? Let $f$ be a non-constant entire function such that $f(n+\dfrac{1}{n})=0$ for all $n\in \Bbb N$. Then $f(2)=0$, $f(3+\frac{1}{3})=0$, and so on. But the problem is that the set of zeros of $f$ does not have a limit point. How can I conclude whether such a function exists or not? Please help. REPLY [6 votes]: There exists such a function. An infinite product such as $$f(z) = \prod_{n =1}^\infty \left(1-\frac{z^2}{(n+1/n)^2}\right)$$ determines a nonconstant entire function of $z$ with zeroes at $z= \pm (n+1/n)$.<|endoftext|> TITLE: Number of polynomials of degree less than 4 satisfying 5 points QUESTION [11 upvotes]: Let polynomial $P(x)$ have the property that $P(1),$ $P(2),$ $P(3),$ $P(4)$ and $P(5)$ are equal to $1$, $2$, $3$, $4$, $5$ in some order. How many possibilities are there for the polynomial $P,$ given that the degree of $P$ is strictly less than $4$? REPLY [18 votes]: By Lagrange interpolation, given the values of $P(1)$, $P(2)$, $P(3)$, $P(4)$, and $P(5)$, there is a unique polynomial of degree $\leq 4$ taking those values. So the question is, for which permutations of the numbers $1$ through $5$ will the resulting $P(x)$ actually have degree $<4$? To determine this, we can look at the actual explicit formula for Lagrange interpolation and see what the coefficient of $x^4$ will be in terms of our five values.
That formula is $$\begin{align*}P(x)=P(1)\frac{(x-2)(x-3)(x-4)(x-5)}{(1-2)(1-3)(1-4)(1-5)}&+P(2)\frac{(x-1)(x-3)(x-4)(x-5)}{(2-1)(2-3)(2-4)(2-5)}\\ &+P(3)\frac{(x-1)(x-2)(x-4)(x-5)}{(3-1)(3-2)(3-4)(3-5)}\\ &+P(4)\frac{(x-1)(x-2)(x-3)(x-5)}{(4-1)(4-2)(4-3)(4-5)}\\ &+P(5)\frac{(x-1)(x-2)(x-3)(x-4)}{(5-1)(5-2)(5-3)(5-4)}, \end{align*}$$ so the coefficient of $x^4$ will be $$\frac{P(1)}{24}-\frac{P(2)}{6}+\frac{P(3)}{4}-\frac{P(4)}{6}+\frac{P(5)}{24}=\frac{P(1)-4P(2)+6P(3)-4P(4)+P(5)}{24}.$$ So we get a polynomial of degree $<4$ iff $$P(1)+6P(3)+P(5)=4(P(2)+P(4)).$$ Now we just have some casework to consider. If $P(3)=1$, the RHS will be at least $4(2+3)=20$ and the LHS is at most $5+6+4=15$, so there are no solutions. Since the problem is symmetric with respect to conjugating by $x\mapsto 6-x$, there are no solutions with $P(3)= 5$ either. If $P(3)=2$, then $P(1)+P(5)$ is divisible by $4$, which means $P(1)$ and $P(5)$ must be $1$ and $3$ (in some order) or $3$ and $5$ (in some order). The equation will hold in the second case but not the first case. This gives four different solutions, since you can swap the two values of $P(1)$ and $P(5)$, and also the two values of $P(2)$ and $P(4)$. Again, by symmetry there are four more solutions if $P(3)=4$. Finally, suppose $P(3)=3$. Then $P(1)+P(5)$ must be $2$ mod $4$, so $P(1)$ and $P(5)$ must be $1$ and $5$ (in some order) or $2$ and $4$ (in some order). Both cases work, and each gives four solutions. Thus there are eight solutions total if $P(3)=3$. Taking all the cases together, then, we find there are sixteen different solutions. Some closing remarks: It's not a coincidence that the coefficients of the values of $P$ in the equation we got are binomial coefficients; you can see this by noticing that the denominator in the Lagrange interpolation term with $P(n)$ is $\pm(n-1)!(5-n)!$ (grouping the positive and negative factors together), and this generalizes if you replace $5$ by another positive integer. Still, it would be nice to have a more conceptual explanation for why we're getting binomial coefficients. It would also be nice to have a more conceptual explanation for how to count the solutions (in particular, something that would generalize if you replaced $5$ by any positive integer).<|endoftext|> TITLE: The real line is not homeomorphic to any non-trivial product space QUESTION [5 upvotes]: I came across a question that interested me recently. It asked the following: Prove that if $\mathbb R$ is homeomorphic to $X \times Y$, then $X$ or $Y$ is a singleton set. I have an easy proof using path-connectedness. I was interested if there is an even more elementary argument. The notion of connectedness had not yet been introduced in the text. REPLY [2 votes]: If $X$ and $Y$ are connected topological spaces, each containing at least two points, then the product space $X\times Y$ has no cut point. Proof. Consider any point $(a,b)\in X\times Y;$ I have to show that $X\times Y\setminus\{(a,b)\}$ is connected. Choose $x_0\in X\setminus\{a\}$ and $y_0\in Y\setminus\{b\}.$ Now consider any point $(x,y)\ne(a,b);$ I will show that $(x,y)$ and $(x_0,y_0)$ are in the same component of $X\times Y\setminus\{(a,b)\}.$ Case I. If $x\ne a$ then $(\{x\}\times Y)\cup(X\times\{y_0\})$ is a connected subset of $X\times Y\setminus\{(a,b)\}$ containing $(x,y)$ and $(x_0,y_0).$ Case II. 
If $y\ne b$ then $(X\times\{y\})\cup(\{x_0\}\times Y)$ is a connected subset of $X\times Y\setminus\{(a,b)\}$ containing $(x,y)$ and $(x_0,y_0).$<|endoftext|> TITLE: If $\{a,b,c,d,e\}\subset[0,1]$ so $\sum\limits_{cyc}\frac{1}{1+a+b}\leq\frac{5}{1+2\sqrt[5]{abcde}}$ QUESTION [16 upvotes]: Let $\{a,b,c,d,e\}\subset[0,1]$. Prove that: $$\frac{1}{1+a+b}+\frac{1}{1+b+c}+\frac{1}{1+c+d}+\frac{1}{1+d+e}+\frac{1}{1+e+a}\leq\frac{5}{1+2\sqrt[5]{abcde}}$$ I tried C-S, convexity and more, but without success. REPLY [4 votes]: Here are two cases. Case 1: (cyclically) $ab < \frac14$. Observe $a+b \ge 2 \sqrt{a b}$ by AM-GM, likewise for the other terms. So it is enough to prove $$ \sum_{cyc} \frac{1}{1 +2 \sqrt{a b}} \leq\frac{5}{1+2\sqrt[5]{abcde}} $$ Let $2 \sqrt{a b} = x$, $2 \sqrt{b c} = y$, etc. Then we need $$ \frac 15 \, \sum_{cyc} \frac{1}{1 +x} \leq\frac{1}{1+ \sqrt[5]{\prod_{cyc}x}} $$ Consider the function $$ f(z) = \frac{1}{1 +e^z} $$ We have $$ f''(z) = \frac{e^z (e^z -1)}{(1+e^z)^3} $$ so for $z < 0$ we have that $f''(z) < 0$ and hence $f(z)$ is strictly concave. Hence by Jensen, $$ \frac 15 \, \sum_{cyc} f(z_i) \leq f(\frac 15 \, \sum_{cyc} z_i) $$ Now apply $e^{z_i} = x$ cyclically. This establishes the inequality for $z < 0$, i.e. when all $x = 2 \sqrt{ab} < 1$. Case 2: (cyclically) $a + b \leq 1$. I am grateful for a comment by Hugh Denoncourt which led to this case. Define a vector $(w,z)$. Consider the function $$ f(w,z) = - \frac{1}{1 +e^z + e^w} $$ which is the negative of the function under consideration, so we are looking for convexity. We have the second partial derivative: $$ \frac{\partial^2 f(w,z)}{\partial w^2} = \frac{e^w (1 +e^z - e^w)}{(1+e^z+e^w)^3} $$ and for $z$ likewise. We also have the determinant of the Hessian of this function: $$ H = \frac{e^z e^w (1 -e^z - e^w)}{(1+e^z+e^w)^5} $$ so for $e^z + e^w < 1$ we have that $H > 0$ and hence $f(w,z)$ is strictly convex. Hence by Jensen's inequality for convex functions of a vector argument, $$ \frac 15 \, \sum_{cyc} f(w_i,z_i) \geq f(\frac 15 \, \sum_{cyc} (w_i,z_i)) $$ Now apply $e^{z_i} = a$ and $e^{w_i} = b$ cyclically. This establishes the inequality for all $a+b\leq1$. However, this will not solve the case fully for $0 \leq a,b \leq 1$. Comment: If it were not the Hessian, but positivity of the second partial derivatives, this would always be given for $0 \leq a,b \leq 1$. Do we need the Hessian?<|endoftext|> TITLE: Limit $\lim_{n\to\infty}\sum_{k=0}^n\binom nk\frac{3k}{2^n(n+3k)}$ QUESTION [6 upvotes]: Is there a closed-form solution for this combinatorial limit of a sum? $$\lim_{n\to\infty}\sum_{k=0}^n\binom nk\frac{3k}{2^n(n+3k)}$$ I tried hypergeometric series and failed. REPLY [8 votes]: Let $X_1, X_2, \ldots $ be iid Bernoulli random variables with success probability $1/2$. Then by the Strong Law of Large Numbers $$\bar{X}_n:=\frac{1}n\sum_{i=1}^n X_i \stackrel{a.s.}{\to} \frac12$$ Also for any continuous function $g$ we have $g(\bar{X}_n) \stackrel{a.s.}{\to} g(1/2)$. Now take $g(x)=\frac{3x}{1+3x}$. $|g(\bar{X}_n)| \le 1$ for all $n$. Then by DCT we have $$E(g(\bar{X}_n)) \to g(1/2)=\frac35$$ Let $Y=n\bar{X}_n$. $Y\sim \operatorname{Bin}(n,1/2)$. Finally note that $$E(g(\bar{X}_n))=E\left(\frac{3\bar{X}_n}{1+3\bar{X}_n}\right)=E\left(\frac{3Y}{n+3Y}\right)=\frac1{2^n}\sum_{k=0}^n \binom{n}{k}\frac{3k}{n+3k}$$ This shows that the limit is indeed $\frac35$.
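For a concrete sanity check, the partial sums can also be evaluated exactly; below is a minimal Python sketch (the cutoffs $n=10,100,1000$ are arbitrary choices, and math.comb keeps the binomial coefficients exact):

from math import comb

def partial_sum(n):
    # (1/2^n) * sum_{k=0}^n C(n,k) * 3k / (n + 3k)
    return sum(comb(n, k) * 3 * k / (n + 3 * k) for k in range(n + 1)) / 2 ** n

for n in (10, 100, 1000):
    print(n, partial_sum(n))

The printed values approach $3/5 = 0.6$ as $n$ grows.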
Similar question: Proof/derivation of $\lim\limits_{n\to\infty}{\frac1{2^n}\sum\limits_{k=0}^n\binom{n}{k}\frac{an+bk}{cn+dk}}\stackrel?=\frac{2a+b}{2c+d}$?<|endoftext|> TITLE: Why does $\frac{|x|}{x^2}$ reduce to $\frac{1}{|x|}$? QUESTION [6 upvotes]: This simplification confused me: $$.....=\frac{|x|}{x^2} = \frac{1}{|x|}$$ I get cancelling a degree of $x$, but why must you introduce the abs. val sign in the denominator? Is it because the left side is guaranteed to be positive, so you must retain that in the final expression? REPLY [3 votes]: $\left|x\right|=x$ if $x>0$ and $\left|x\right|=-x$ if $x<0$. If $x>0$ we will have $\frac{x}{x^2}=\frac{1}{x}$; if $x<0$ we will have $\frac{-x}{x^2}=-\frac{1}{x}$. So, $\frac{\left|x\right|}{x^2}=\frac{1}{\left|x\right|}$.<|endoftext|> TITLE: Can two integer polynomials touch in an irrational point? QUESTION [35 upvotes]: We define an integer polynomial as a polynomial that has only integer coefficients. Here I am only interested in polynomials in two variables. Example: $P = 5x^4 + 7 x^3y^4 + 4y$ Note that each polynomial $P$ defines a curve by considering the set of points where it evaluates to zero. We will speak about this curve. Example: The circle can be described by $x^2 + y^2 -1 = 0$ We say two polynomials $P,Q$ are touching at a point $(a,b)$ if $P(a,b) = Q(a,b) = 0$ and the tangent at $(a,b)$ is the same. Or more geometrically, the curves of $P$ and $Q$ are not crossing. (The Figure was created with IPE - drawing editor.) We also need a further technical condition. For this let $D$ be a ''small enough'' disk around $(a,b)$. Then $Q$ and $P$ define two regions indicated green and yellow. Those regions must be interior disjoint. Without this condition, for $P = y-x^3$ and $Q=y$ the point $(0,0)$ would be a touching point as well. See also the right side of the figure. (I know that I am not totally precise here, but I don't want to be too formal, so that I can reach a wide audience.) (Thanks for the comment from Jeppe Stig Nielsen.) Example: $P = y - x^2$ (Parabola) $Q = y$ ($x$-axis) They touch at the origin $(0,0)$. My question: Do there exist two integer polynomials $P,Q$ that touch at an irrational point $(a,b)$? (It would be fine for me if either $a$ or $b$ is irrational) Many thanks for answers and comments. Till REPLY [7 votes]: Here's a general way to find such examples where both curves are of the form $y=f(x)$. Notice that $y=f(x)$ and $y=g(x)$ meet at a given value of $x$ iff that value of $x$ is a root of the polynomial $h(x)=f(x)-g(x)$, and they have the same tangent line iff that value is a root of $h(x)$ of multiplicity greater than $1$. So this means that to find an example, we just need a polynomial $h(x)$ with integer coefficients that has a double root at some irrational value of $x$ (we can then take $g(x)$ to be any polynomial with integer coefficients at all, and $f(x)=h(x)+g(x)$). This is easy to do: just take any polynomial $p(x)$ with integer coefficients and an irrational root, and let $h(x)=p(x)^2$.<|endoftext|> TITLE: Infinite sum of logs puzzle QUESTION [8 upvotes]: Here is a neat infinite sum puzzle: Prove the following is true when $|x|<1$ $$-\ln(1-x)=\ln(1+x)+\ln(1+x^2)+\ln(1+x^4)+\dots+\ln(1+x^{2^k})+\dots\\-\ln(1-x)=\sum_{k=0}^\infty\ln(1+x^{2^k})$$ Hope you all enjoy! HINT: The answer is probably simpler than you might think. REPLY [8 votes]: Exponentiate both sides to get $\frac{1}{1-x} = (1+ x)(1 + x^2)\cdots$
By uniqueness of the binary representation of a nonnegative integer we find that the $x^n$ coefficient on the right side is $1$ for all $n$, so equality follows by the power series representation of the left side.<|endoftext|> TITLE: Increasing $g$ where $g' = 0$ a.e. but $g$ not constant on any open interval? QUESTION [6 upvotes]: As the question title suggests, does there exist an increasing function $g$ such that $g' = 0$ almost everywhere but $g$ isn't constant on any open interval? REPLY [2 votes]: Yes. Let $\phi(x)$ be the Cantor-Lebesgue function for $[0,1]$ and continue it to a function on $\mathbb{R}$ by fixing it at $1$ for $x>1$ and at $0$ for $x<0$. Let $O_n = (a_n,b_n)$ be an enumeration of all open intervals in $\mathbb{R}$ with rational endpoints. Define $ \phi_n(x) = \phi(\frac{x-a_n}{b_n-a_n}) $ and define $$ g(x) = \sum_{n=1}^\infty \frac{1}{2^n}\phi_n(x)$$ Now, for us to differentiate $g$ we need to recall Fubini's theorem on the differentiation of series of monotone functions (whose hypotheses, as you can verify, hold here). Then we have $g'(x)= \sum_{n=1}^\infty \frac{1}{2^n}\phi_n'(x) = 0 \quad (\text{a.e.})$ $g$ is strictly increasing: if $x>y$ then $\phi_n(x) \geq \phi_n(y)$ for all $n$, and moreover there must exist some $k$ with $y < a_k < b_k < x$, so that $\phi_k(x) = 1 > 0 = \phi_k(y)$. A strictly increasing function is not constant on any open interval.<|endoftext|> TITLE: $\mu$ absolutely continuous with regards to Lebesgue measure on $[0, 1]$? QUESTION [6 upvotes]: Say we have that $\mu$ is a measure on the Borel $\sigma$-algebra on $[0, 1]$ and for every $f$ that is real-valued and continuously differentiable we have $$\left| \int_0^1 f'(x)\,d\mu(x)\right| \le \sqrt{\int_0^1 f(x)^2\,dx}.$$ Is $\mu$ absolutely continuous with respect to Lebesgue measure on $[0, 1]$? REPLY [5 votes]: Hint/idea: Let $0\le a < b\le 1.$ Define $f$ to be the piecewise linear function whose graph connects the points $(0,0), (a,0), (b,1),(1,1).$ This $f$ is not $C^1$ but I think it shows the way. We can think of $f'$ as equal to $0$ on $[0,a),$ equal to $1/(b-a)$ on $[a,b],$ and equal to $0$ on $(b,1].$ Then $$|\int_0^1 f'\, d\mu| = |\int_{[a,b]} f'\, d\mu| = |\frac{\mu([a,b])}{b-a}| \le (\int_0^1 f^2 )^{1/2} \le 1.$$ Thus $|\mu([a,b])| \le b-a.$ This is true for any $a,b$ and thus shows $\mu$ is AC with respect to Lebesgue measure.<|endoftext|> TITLE: Regulating $\int_0^\infty \sin x \, \mathrm{d} x$ QUESTION [20 upvotes]: The limit $$\lim_{a \to \infty} \int_0^a \sin x \, \mathrm{d} x$$ does not exist. However, consider that $$ \lim_{\epsilon \to 0} \int_0^\infty e^{- \epsilon x} \sin x \, \mathrm{d} x = 1 \,.$$ Here I have 'regulated' the integral. What I discovered, and what strikes me as very surprising, is that if, instead of an exponential, I choose a different function $f(x, \epsilon)$ which tends pointwise to $1$ as $\epsilon$ goes to $0$ and tends to $0$ as $x$ goes to $\infty$, I get convergence to exactly the same limit. So if I choose $$ f(x, \epsilon) = \frac{1}{1 + \epsilon x^2} \quad \text{or} \quad \mathrm{sech}^2(\epsilon x) \quad \text{or} \quad (1 + 2 \epsilon x^2) e^{-\epsilon x^2}\,,$$ then the integral of $f(x, \epsilon) \sin x$ from 0 to $\infty$ tends to $1$ as $\epsilon$ tends to $0$. Why is this happening? EDIT: I was initially satisfied with the responses given, but on further thought I don't think I follow the logic of tired's answer, which invokes the stationary phase approximation.
In particular, my understanding of the stationary phase approximation is that one looks for stationary points of the argument of the exponential since these correspond to the points where the oscillation is slowest – away from this point, the oscillations 'cancel out' because of how rapid they are. However, in this case the argument of the exponential has no stationary points. Further, whilst I can appreciate that (in the case where there are no stationary points) a 'boundary maximum point' would dominate the integral in the real case (that is, for which the argument of the exponential is real), I can't see that this would be relevant in the imaginary case. I am looking for an answer that includes the following three points: (1) a proof that the limit of this integral is independent of regulator (for a suitable class of regulators); (2) some intuition as to why we might expect this particular integral to be independent of regulator; (3) information on whether there is some general theory about assigning, perhaps uniquely, a value to non-convergent integrals. In particular, I would like to know whether the fact that the integral at the top of this question is 'almost convergent' (in the sense that it is bounded for all $a$) makes it easier to unambiguously regulate. REPLY [3 votes]: Suppose $f\in C^1([0,\infty)),$ $\lim_{x\to \infty} f(x) = 0,$ and both $f,f' \in L^1([0,\infty)).$ Then $$\tag 1 \lim_{\epsilon\to 0} \int_0^\infty f(\epsilon x) \sin x\, dx = f(0).$$ Proof: Integrating by parts shows the integral equals $$f(\epsilon x)(-\cos x)\big |_0^\infty + \int_0^\infty \epsilon f'(\epsilon x) \cos x\, dx = f(0) + \int_0^\infty f'(y) \cos (y/\epsilon)\, dy.$$ As $\epsilon \to 0,$ the last integral $\to 0$ by the Riemann-Lebesgue lemma. This gives $(1).$ Remarks: 1. In the cases $f(x) = e^{-x}, 1/(1+x^2), \mathrm {sech}^2 x, (1+2x^2)e^{-x^2},$ $(1)$ gives the results you mentioned (although with $\sqrt \epsilon$ in place of $\epsilon$ for the second and fourth functions). 2. The hypothesis $\lim_{x\to \infty} f(x) = 0$ is not really needed, because it's implied by the other hypotheses. This is because $f'\in L^1$ implies $f$ is uniformly continuous, and a uniformly continuous function in $L^1([0,\infty))$ must vanish at $\infty.$ I included the $f(x)\to 0$ hypothesis to keep the proof simple. 3. $(1)$ can be generalized: Keep the hypotheses on $f$ as above, and assume $G\in C^1([0,\infty)),$ with both $G,G'$ bounded. Then $$\tag 2 \lim_{\epsilon\to 0} \int_0^\infty f(\epsilon x) G'(x)\, dx = -G(0)f(0).$$ For example, taking $G(x) = - \cos x$ gives $(1).$ The proof of $(2)$ is very much like that of $(1);$ we need an analogue of Riemann-Lebesgue, but that's straightforward.<|endoftext|> TITLE: (Somewhat) generalised mean value theorem QUESTION [11 upvotes]: Problem. Let $f:\Bbb R\to\Bbb R$ be a continuous map. Let $n$ be a non-negative integer. Then show that there is $0<\xi<1$ such that $$\int_0^1 (1-x)^n f(x)\,dx=\frac{f(\xi)}{n+1}.$$ REPLY [7 votes]: Generally you have the following integral mean value theorem. Theorem If $f$ and $g$ are integrable functions with $f$ continuous and $g$ not changing sign, then there is some $c \in [a,b]$ such that $$ \int_a^b f(x) g(x) dx = f(c) \int_a^b g(x) dx.$$ Once you know the statement, it's quite straightforward to prove.
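Before the proof, a quick numerical illustration of the statement (a minimal Python sketch; the choices $f(x)=e^x$, $g(x)=x^2$ and $[a,b]=[0,1]$ are arbitrary):

from math import exp, log

lhs = exp(1) - 2       # integral of x^2 e^x over [0,1] equals e - 2 (by parts)
gamma = 1 / 3          # integral of x^2 over [0,1]
c = log(lhs / gamma)   # solve f(c) = lhs / gamma, i.e. e^c = 3(e - 2)
print(c)               # approximately 0.768, which indeed lies in [0, 1]

Here $c$ lands inside $[0,1]$, exactly as the theorem guarantees.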
If you let $\gamma = \int_a^b g(x) dx$ and let $m,M$ be the min, max that $f$ achieves on $[a,b]$ (respectively), then $$ m\gamma \leq \int_a^b f(x) g(x) dx \leq M\gamma,$$ or rather $$ m \leq \frac{\int_a^b f(x) g(x) dx}{\gamma} \leq M.$$ By the intermediate value theorem, $f$ takes every value from $m$ to $M$ within $[a,b]$, and so there is some $c \in [a,b]$ such that $$ f(c) = \frac{\int_a^b f(x) g(x) dx}{\gamma}.$$ Rearranging gives the theorem [except when $\gamma = 0$ --- but that's a straightforward exercise that I leave aside]. $\diamondsuit$ This applies to your problem by taking $f = f$ and $g(x) = (1-x)^n$ in the theorem above. Then in particular $$ \int_0^1 g(x) dx = \int_0^1 (1-x)^n dx = \frac{1}{n+1}.$$ This concludes the proof. $\spadesuit$<|endoftext|> TITLE: Automorphism of genus $6$ plane curves QUESTION [6 upvotes]: Let $C\subset \mathbf{P}^2$ be a genus $6$ smooth plane curve, $\sigma\colon C\to C$ be an automorphism, is $\sigma$ necessarily induced from an automorphism of $\mathbf{P}^2$? REPLY [2 votes]: I think the following argument works over $\mathbb{C}$. Let $\phi \in \text{Aut}(C)$. Then $\phi^* \mathcal{O}(1) \cong \mathcal{O}(1)$. This follows from M. Noether's theorem: a smooth plane curve of degree $d$ has no $g^1_k$ with $k < d-1$.<|endoftext|> TITLE: What's the difference between continuous and piecewise continuous functions? QUESTION [5 upvotes]: A continuous function is a function where the limit exists everywhere, and the function at those points is defined to be the same as the limit. I was looking at the image of a piecewise continuous function on the following page: http://tutorial.math.lamar.edu/Classes/DE/LaplaceDefinition.aspx But the image of the function they've presented isn't continuous. As such, I'm confused by what a piecewise continuous function is and the difference between it and a normal continuous function. I'd appreciate it if someone could explain the difference between a continuous function and a piecewise continuous function. Also, please reference the image of the piecewise continuous function presented on this page http://tutorial.math.lamar.edu/Classes/DE/LaplaceDefinition.aspx . Thank you. REPLY [2 votes]: $\newcommand{\R}{\mathbb{R}}$The notion of piecewise continuity (PWC) is used differently in different contexts. Often, a function $f:\R\to\R$ is called PWC if it is continuous everywhere except at a finite number of points. In the context of the Laplace transform and other integral transforms, a function $f$ is said to be PWC if it is continuous on a partition of intervals of its domain and at the boundaries of the intervals the function has well-defined and finite limits. Definition. [PWC] A function $f:[a,b]\to\R$ is called piecewise continuous (PWC) if there exist $a = x_0 < x_1 < \ldots < x_n = b$ so that (i) $f$ is continuous on $(x_k, x_{k+1})$ for all $k=0,\ldots, n-1$, and (ii) the limits $\lim_{x\to{}x_{k+1}^{-}}f(x)$ and $\lim_{x\to{}x_{k}^{+}}f(x)$ exist and are finite for all $k=0,\ldots, n-1$. According to this definition, the function $$ f(x) = \begin{cases} 0, &\text{ for } x = 0 \\ \frac{1}{x}, &\text{ for } x{}>{}0 \end{cases} $$ defined over $[0, \infty)$ is not PWC according to the second definition, although it has only one point of discontinuity. Additionally, the function $f(x)=\tfrac{1}{x}$, $x\in\R\setminus\{0\}$, is not PWC, again because the limits $\lim_{x\to 0^+}f(x)$ and $\lim_{x\to 0^-}f(x)$ are not finite.<|endoftext|> TITLE: Is there a Lucas-Lehmer equivalent test for primes of the form ${3^p-1 \over 2}$?
QUESTION [7 upvotes]: I'm reviewing the cyclotomic form $f_b(n)= {b^n-1 \over b-1}$ for various properties to extend an older treatise of mine on that form. With respect to primality there is the Lucas-Lehmer test for primality of $f_2(p)$, where of course $p$ itself must be a prime. I was now looking at whether I can say something about primes of the form $f_3(p) = {3^p-1 \over 2}$, for instance $f_3(3)=13, f_3(7)=1093, f_3(13)=797161, ...$ (more terms at the bottom). For this I was looking for a comparable test, similar to the scheme in the Lucas-Lehmer test. There is a short remark at Weisstein's MathWorld involving the concept of Lucas sequences for a generalized primality test (eq (2) to (4)), of which the Lucas-Lehmer test is then only a special case, but I could not decode the formulae & recipes into an algorithm. So Q: Is there a primality test for numbers of the form $f_3(p) = {3^p-1 \over 2}$ similar to the scheme in the Lucas-Lehmer test? More terms for $(3^{a(n)}-1)/2 \in \Bbb P$ a(n)=[3, 7, 13, 71, 103, 541, 1091, 1367, 1627, 4177, 9011, 9551, 36913, 43063, 49681, 57917, 483611, 877843 ] source: OEIS:A028491 REPLY [2 votes]: Here is the basic idea behind the Lucas-Lehmer primality test. For $n= 2^p-1$ we can choose $d= 3$ (quadratic reciprocity) and $\alpha= 2+\sqrt{3}$ If $n$ is prime and $d$ is not a square $\bmod n$ then $$\mathbb{Z}/n\mathbb{Z}[\sqrt{d}] = \{ a+b \sqrt{d}, (a,b) \in (\mathbb{Z}/n\mathbb{Z})^2\}$$ is a field with $n^2$ elements, and its multiplicative group is cyclic with $n^2-1$ elements. If we find an element $\alpha$ of multiplicative order $n+1$ (i.e. $\alpha^{n+1} \equiv 1 \bmod n, \alpha^{(n+1)/p} \not\equiv 1 \bmod n $ for every prime divisor $p \mid n+1$) then $n$ is prime. (since otherwise, with $q$ the least prime divisor of $n$, $\mathbb{Z}/q\mathbb{Z}[\sqrt{d}]^\times$ is a group with at most $q^2-1 \le n-1$ elements, so the order of $\alpha\bmod q$ can't be $n+1$) Thus all we need is to know the prime divisors of $n+1$ and compute $\alpha^{(n+1)/p} \bmod n$ for many $\alpha$. The same idea works in $\mathbb{Z}/n\mathbb{Z}$ if we know the prime divisors of $n-1$, and in $\mathbb{Z}/n\mathbb{Z}[x]/(f(x))$ for some irreducible polynomial $f$ of degree $k$ if we know a large part of the factorization of $n^k-1$. $$\boxed{\ \ \text{Thus it doesn't work for }\ \frac{3^a-1}{2} \quad(\text{but it does for }2\cdot 3^a-1)\ \ }$$<|endoftext|> TITLE: Can $\log(1-U)-\log(U)+W$ be normally distributed, with $U$ uniform on $(0,1)$ and $W$ independent of $U$? QUESTION [12 upvotes]: Assume that $U$ and $V$ are independent random variables with values in $(0,1)$ and that $U$ is uniformly distributed. Can it happen that $$L=\log\left(\frac{(1-U)V}{U(1-V)}\right)$$ is normally distributed? As a motivation, note that $L$ is the log odds ratio of two binary random variables with Bernoulli distributions of random parameters $U$ and $V$, and that the question above arose from discussions here, where the suggestion was made that no such distribution of $V$ exists. This can also be formulated in terms of PDFs or in terms of characteristic functions. First, computing the PDF of $\log((1-U)/U)$, one arrives at the equivalent formulation: In terms of PDFs: Consider some random variable $X$ with PDF $$f_X(x)=\frac{e^x}{(e^x+1)^2}$$ on the real line, does there exist any random variable $Y$ independent of $X$ such that $$Z=X+Y$$ is normally distributed?
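This density is easy to confirm by simulation; here is a minimal Python sketch (the sample size and test points are arbitrary choices), comparing the empirical CDF of $\log((1-U)/U)$ with the logistic CDF $1/(1+e^{-t})$, whose derivative is $f_X$:

import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=10**6)
x = np.log((1 - u) / u)
for t in (-2.0, 0.0, 1.5):
    # empirical P(X <= t) versus the logistic CDF 1/(1 + e^{-t})
    print(t, (x <= t).mean(), 1 / (1 + np.exp(-t)))

The two columns closely agree, consistent with $f_X(x)=e^x/(e^x+1)^2$.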
Finally, the characteristic function of $X$ is $$\varphi_X(t)=E(e^{itX})=\frac{\pi t}{\sinh(\pi t)}$$ hence one is also asking the following: In terms of characteristic functions: Determine if there exists any positive $v$ such that $g_v$ is a characteristic function, where $$g_v(t)=\frac{\sinh(t)}t\,e^{-vt^2}$$ Expansions at $t=0$ show that $g_v$ can be a characteristic function only if $v\geqslant\frac16$. REPLY [10 votes]: Consider some solution $Z=X+Y$, then the identity $$e^Z=e^X\cdot e^Y$$ involves only positive random variables hence the independence of $(X,Y)$ implies the identity in $(0,+\infty]$ that $$E(e^Z)=E(e^X)\cdot E(e^Y)$$ Now, $c=E(e^Z)$ is finite since $Z$ is normal, and $E(e^X)=+\infty$ because $e^xf_X(x)\to1$ when $x\to+\infty$ hence $x\mapsto e^xf_X(x)$ is not integrable on the real line. But the equation $$c=+\infty\cdot b$$ has no solution $b$ in $(0,+\infty]$, hence there is no random variable $Y$ independent of $X$ such that $Z=X+Y$ is normal. This approach really proves a more general result: Consider two distributions $\mu$ and $\nu$ such that $\int_\mathbb Re^xd\mu(x)$ is infinite and $\int_\mathbb Re^xd\nu(x)$ is finite. Then if $P_X=\mu$, there exists no $Y$ independent of $X$ such that $P_{X+Y}=\nu$.<|endoftext|> TITLE: What is the probability that $\min\limits_{i}\max\limits_{j} M_{ij}\gt \max\limits_{j}\min\limits_{i} M_{ij}$ QUESTION [8 upvotes]: Assume you have an $n\times n$ matrix $M$, each entry is filled with a number from $1$ to $n^2$ randomly, and no two entries are the same. There are $n$ rows; select the max number of each row, so there are $n$ numbers. $A$ is defined as the minimum number of these $n$ numbers. To clarify: $$ A:= \min_{i}\max_{j} M_{ij}\\ B:= \max_{j}\min_{i} M_{ij}. $$ What is $\Pr[A>B]$? Edit 1: The computer run has the following result: $$ 0.332877, 0.698953, 0.886191, 0.960409, 0.986796, 0.995996, 0.99876, 0.999604, 0.999892 $$ This is from $n=2$ to $n=10$. Edit 2: More hint: Computer check for $\Pr[A\ge B]$ $$ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 $$ Code:

import numpy as np
N = 1000000
ratio = []
for n in range(2,11):
    count = 0
    for j in range(N):
        m = np.random.permutation(n**2).reshape(n,n)
        a = min([max(m[i,:]) for i in range(n)])
        b = max([min(m[:,i]) for i in range(n)])
        if(a>b):
            count += 1
    ratio.append(count/N)
print(ratio)

REPLY [12 votes]: As already noted in the comments, a possible initial approach to this problem is the following. Let us suppose that, after selecting the maximal number in each row, the minimum number $A$ is that in the $j^{th}$ row. Also, let us suppose that, after selecting the minimum number in each column, the maximal number $B$ is that in the $i^{th}$ column. Now consider the number $x_{i,j}$ corresponding to the crossing point of the $i^{th}$ column and $j^{th}$ row. We directly get that $A \geq x_{i,j} \geq B$. Thus, the desired probability that $A>B$ is equal to $1-Pr[ A = x_{i,j} = B]$, where this second term expresses the probability that the two procedures finally identify the same number in the matrix. We can now continue as follows. First, note that the condition that both procedures finally identify the same number in the matrix implies that there exists a number $x_{i,j}$ in the matrix which is the highest in its row, and the lowest in its column. Also note that, if such a number exists, then it must be unique.
To show this, let us assume that there exists another number $x_{r,s} $ (corresponding to the crossing point of the $r^{th} $ column and $s^{th} $ row) that - just like $x_{i,j} $ - is the highest in its row, and the lowest in its column. This would imply $$x_{i,j} > x_{r,j} > x_{r,s} > x_{i,s} > x_{i,j} $$ which is clearly impossible. Therefore, we have that $Pr[ A = x_{i,j} = B]$ is equivalent to the probability that there exists a number $x_{i,j} $ in the matrix which is the highest in its row, and the lowest in its column. We can now try to calculate this probability. The probability $P_C(k)$ that, after dividing the first $n^2$ integers into $n $ random groups of $n $ elements (i.e., the columns), a given number $k$ of this set is the lowest in its group is given by $$P_C(k)= \frac {n^2-k}{n^2-1} \cdot \frac{n^2-k-1}{n^2-2}\cdots \frac {n^2-k-n+2} {n^2-n+1} $$ where the sequence of fractions expresses the probability that, given $k $, then the first, the second... and the $(n-1)^{th}$ among the other numbers in its group/column are all $>k $. This probability formula is valid only for $k \leq n^2-n+1$ (for higher values of $k $ the probability is zero). The expression above can also be written as $$ P_C(k)= \frac {(n^2-k)!}{(n^2-k-n+1)!} \cdot \frac {(n^2-n)!}{ (n^2-1)!} =\frac {\binom{n^2-k}{n-1}} {\binom{n^2-1}{ n-1}} $$ By similar considerations, we can calculate the probability $ P_R(k) $ that, after dividing the first $n^2$ integers in other $n $ random groups of $n $ elements (i.e., the rows), our given number $k$ is the highest in its group. Because this time we have to exclude, from the possible other terms that can appear in this group, the $n-1$ terms already considered in the same column of $k $ (it is clear that these terms cannot also appear in the same row of $k $), and taking into account that all these $n-1$ terms are $>k $, we obtain $$P_R(k) = \frac {k-1}{n^2-n} \cdot \frac{k-2}{n^2-n-1}\cdots \frac {k-n+1} {n^2-2n+2} $$ where again the sequence of fractions expresses the probability that, given $k $, then the first, the second... and the $(n-1)^{th}$ among the other numbers in its group/row are all $<k$. This formula is valid only for $k \geq n$ (for smaller values of $k$ the probability is zero), and it can be rewritten as $$P_R(k)=\frac {\binom{k-1}{ n-1}} {\binom{n^2-n}{ n-1}}$$ Multiplying the two probabilities and summing over all admissible values of $k$, we finally obtain $$Pr [A >B] = 1- Pr[A= x_{i,j}=B] $$ $$=1- \sum _{k=n}^{n^2-n+1} \frac {\binom{n^2-k}{ n-1}} {\binom{n^2-1}{ n-1} } \frac {\binom{k-1}{ n-1}} {\binom{n^2-n}{ n-1}} $$ Note that, for the case $n=2$, this expression reduces to the cases $k=2$ and $k=3$. Since the summation gives $1/3$ in both cases, the final result for the case $n=2$ is $$Pr [A>B]=1-1/3-1/3=1/3$$ as anticipated in the comments. The cases $n=3$ and $n=4$ give $$1-3/10=7/10$$ and $$1-4/35=31/35$$ respectively, which are very near to the experimental values reported in the OP. The probability grows rapidly and tends to $1$. For example, for $n=10$ its value is $$1-5/46189 \approx 0.99989... $$ again confirming the experimental value. To get a closed value, we can transform the binomial coefficients using factorials and simplify, so that the final result becomes $$Pr [A >B] =1- \frac {2n \, (n!)^2}{(2n)!}$$ The rapid increase of the function and its first values for small $n $ are shown by WA here.<|endoftext|> TITLE: Is there a formula for the expansion coefficients of powers of an inner product? QUESTION [11 upvotes]: I would like to expand the following expression $$\left(\sum_{i,j=1}^N \,x_i A_{ij} x_j\right)^n$$ where $\mathbf A$ is a symmetric $N\times N$ matrix, $\mathbf {x}$ is an $N$-component vector, and $n$ is a non-negative integer power.
The expansion of this expression yields a homogeneous polynomial of order $2n$ in the $x_k$. What is the coefficient of the term $x_1^{p_1} x_2^{p_2} \cdots x_N^{p_N}$ for $p_1 + p_2 + \cdots + p_N = 2n$ in the expansion of this expression? Has the formula been worked out before? REPLY [2 votes]: Here is an answer using the symmetry of the inner product and based upon the multinomial theorem. From the multinomial theorem \begin{align*} \left(\sum_{i=1}^Nx_i\right)^n=\sum_{k_1+k_2+\cdots+k_N=n}\binom{n}{k_1,k_2,\ldots,k_N} \prod_{t=1}^{N}x_t^{k_t}\tag{1} \end{align*} we obtain, using the symmetry $A_{i,j}=A_{j,i}$, \begin{align*} \left(\sum_{i,j=1}^Nx_iA_{i,j}x_j\right)^n=\left(\sum_{i=1}^NA_{i,i}x_i^2+2\sum_{1\leq i<j\leq N}A_{i,j}x_ix_j\right)^n \end{align*} to which $(1)$ can now be applied, with one index for each of the $\binom{N+1}{2}$ monomials $A_{i,i}x_i^2$ and $2A_{i,j}x_ix_j$.<|endoftext|> TITLE: Closed orientable $n$-manifold $X$, there's a map $f: S^n \to X$ of nonzero degree, $n > 1$, is $\pi_1(X)$ finite? QUESTION [7 upvotes]: A closed orientable $n$-manifold $X$ satisfies $(*)$ if there is some map $f: S^n \to X$ of nonzero degree (i.e. for which the image of the generator of $H_n(S^n)$ is equal to a nonzero multiple of the generator of $H_n(X)$). If $X$ satisfies $(*)$ and $n > 1$, does it follow that $\pi_1(X)$ is finite? REPLY [5 votes]: Let $Y$ be the universal cover of $X$. Then since $n>1$ the map $f$ has a lift $\bar{f}$ to the universal cover $Y$. If $\pi_1(X)$ were infinite, then $Y$ would be non-compact. And $f=p\circ \bar{f}$, where $p$ is the universal covering map. Since the map $f$ factors through $Y$, the induced map $f_*:H_n(S^n)\to H_n(X)$ factors through $H_n(Y)$, which is $0$ (since $Y$ is non-compact, the $n$-th homology of $Y$ is zero by the non-compact version of Poincaré duality). This implies $\deg(f)=0$. Thus if $\deg(f)\neq 0$, then $\pi_1(X)$ has to be finite.<|endoftext|> TITLE: Is it possible to cover an $8 \times 8$ board with $2 \times 1$ pieces? QUESTION [8 upvotes]: We have an $8\times 8$ board, colored with two colors like a typical chessboard. Now, we remove two squares of different colour. Is it possible to cover the new board with two-colour pieces (i.e. domino pieces)? I think we can, as after the removal of the two squares, we are left with $64-2=62$ squares with $31$ squares of each colour, and - since the domino piece covers two colours - we can cover the new board with domino pieces. But how should one justify it mathematically? REPLY [2 votes]: Completing an approach suggested by Thomas Andrews: if we can show that on the complete chessboard any proper subset of the white squares has more black neighbors than it has members, then Hall's marriage theorem will apply to the chessboard with two squares erased. Suppose therefore, that a proper subset of the white squares is given. Since the red and green lines in the following diagram connect all the white squares, there will be at least one red or green line that goes from a square in the subset to a square outside of it: Assume without loss of generality that there is a red line joining a square in the subset to a square outside the subset. (Otherwise mirror everything around the white diagonal). Now pair up each white square with the black neighbor it is connected to by a blue line in this diagram: This gives one neighboring black square for each white square. However, the white square whose diagonal partner is not selected is additionally a neighbor of the non-selected square's black partner, which is not otherwise used.
So, as desired, our set of white squares has more black neighbors in total than there are white squares in the set.<|endoftext|> TITLE: measure theory for dummies QUESTION [7 upvotes]: Is there a book with the simplest examples one can ever imagine? For example: "Let's say we have "tree" "apple" "1" . . ." What is a sigma algebra of this set, what is the sigma algebra generated by something in this set, what is the Borel sigma algebra, etc. It would be awesome if the book covered the main topics in measure theory that are important to probability and stochastic processes. Do such books exist? For Aduh: Books I have read on measure theory use notation like this below. I want the author instead to explain the intuition in a simple way. REPLY [4 votes]: Every textbook on measure theory that I've looked at has plenty of simple examples of the kind you mention (not with trees and apples, but simple nonetheless). What do you find lacking in the texts you've read? Regarding your example $A:= \{\text{tree,apple,1} \}$, it's a mistake to ask what is the sigma algebra of this set (I hope I understand your question correctly here). There exist multiple sigma algebras of subsets of $A$: $\{ \emptyset, A \}$ is one, the power set of $A$ is another. Again, any of the standard references should make this very clear. When I started learning about Lebesgue measure and integration, I found Taylor's General Theory of Functions and Integration very helpful (and still do). It moves slowly and gives lots of examples. It also has a Dover edition and so is very affordable. If you're interested in an introductory text on measure-theoretic probability, I can recommend Rosenthal's A First Look at Rigorous Probability Theory. I would not consider this a textbook in measure theory proper, but it explains and makes use of the basic measure-theoretic concepts needed for probability and the exercises are not too difficult. Addendum. I will stick with my original recommendations in light of your edit. You should keep in mind that, in learning mathematics, part of your job as a reader is to think of intuitive explanations and simple examples of the new concepts that are introduced. That's how one learns. Reading math is an active affair; you have to struggle with examples and exercises until the concepts become familiar. No matter how clear and simple your author may be, you'll never learn math by just passively absorbing a textbook. REPLY [2 votes]: Personally, I do not know of a book that simple. With that being said, Terence Tao's An Introduction to Measure Theory is quite approachable and readable as an introduction to Measure Theory, assuming you have the prerequisite background. More particularly, if you want simple examples, focus first on the Lebesgue Theory. It is more geometric and a bit less abstract, but it provides a firm base for the pursuit of abstract Measure Theory later.<|endoftext|> TITLE: Can Evans's proof for the theorem regarding global approximation of Sobolev functions be significantly simplified? QUESTION [5 upvotes]: Here $U$ is an open subset of $\mathbb{R}^n$. Above is a theorem regarding approximation of Sobolev functions in Evans's Partial Differential Equations. When I tried to recover the proof on my own, I found that the proof might be much shorter than the one in the book. But it looks too simple to be true and I'm wondering if there is a big gap there. Here is my argument. Suppose $u\in W^{k,p}(U)$. Then $u\in L^p(U)$ by definition and thus $u\in L^1(U)$ since $U$ is bounded.
Now according to the answer and comments to the following questions: "convolutions and mollification of functions in $L^1_{\text{loc}}(\Omega)$" and "Properties of mollification for integrable functions", one can define the mollification $u^\epsilon=\eta_\epsilon*u$ on $U$ such that $u^\epsilon\in C^\infty(U)$. Moreover, since $$ D^\alpha u^\epsilon=\eta_\epsilon*D^\alpha u \quad\textrm{in } U, $$ one has $$ \|u^\epsilon-u\|_{W^{k,p}(U)}^p=\sum \|D^\alpha u^\epsilon-D^\alpha u\|_{L^p(U)}^p\to 0. $$ Could anyone identify if there is any serious mistake in the above argument? [Added:] A possible naive analogy I make in the above argument is the following. First of all, I have the following facts: $D^\alpha u^\epsilon=\eta_\epsilon*D^\alpha u$ in $\color{blue}{U_\epsilon}$, where the definition can be seen in the linked question. Also, $D^\alpha u^\epsilon\to D^\alpha u$ in $L^p(V)$ for any $V\Subset U$. Now that I can do mollification on the entire domain $U$ instead of just $U_\epsilon$, I just guess one might have $D^\alpha u^\epsilon=\eta_\epsilon*D^\alpha u$ in $\color{blue}{U}$ and $D^\alpha u^\epsilon\to D^\alpha u$ in $L^p(U)$. For the original "long" proof of THEOREM 2 by Evans, see this question. REPLY [4 votes]: NOTE. The numbering of equations is not progressive because of successive edits. I have left the original numbering so that comments stay understandable. If you perform the mollification in the brutal way $$ u^\epsilon(x)= \int_{\mathbb R^n} \eta_\epsilon(x-y)u(y)\mathbf 1_{y\in U}\, dy,\quad x\in U $$ that is, setting $u$ to be equal to $0$ outside $U$, then the last two properties mentioned in your question need not hold in the whole domain $U$. For a concrete example consider the space $W^{1,2}(-1, 1)$ and the constant function $u(x)=\mathbf 1_{|x|\le 1}.$ You have that $$\frac{du}{dx} \equiv 0\quad \text{in } U.$$ When you perform the brutal mollification you get $$\frac{d}{dx} \left( u\ast \eta_\epsilon\right)(x)= \eta_\epsilon(x+1) - \eta_\epsilon(x-1)\tag{2}$$ This already shows that the desired property $$\tag{!!!}\frac{d}{dx}\left( u\ast \eta_\epsilon\right) = \frac{du}{dx}\ast \eta_\epsilon\quad \text{in }U$$ need not hold in $U$. Moreover, you can easily check that the $L^2(-1, 1)$ norm of (2) blows up to infinity as $\epsilon\downarrow 0$. This shows that the desired property $$\tag{!!!}\left \| \frac{d}{dx}u\ast\eta_\epsilon - \frac{du}{dx}\right\|_{L^2(U)}\to 0$$ need not hold either. Note that if you exclude a small neighborhood of $\{-1, +1\}$, that is, if you consider $U_\delta= (-1+\delta, 1-\delta)$ instead of $U$, then both properties $(!!!)$ do hold (the first holds provided $\epsilon < \delta$). That's why Evans worries about taking mollifications an epsilon away from the boundary of $U$. Final remark. The distributional point of view provides some more insight. Note that, if you extend $u$ to be zero outside of $U$ and consider the result as a distribution on $\mathbb R$, then its derivative is equal to $$\frac{ du}{dx} = \delta(x+1)-\delta(x-1).\tag{1}$$ The brutal extension produced two Dirac deltas in the first derivative. Those are responsible for the failure of properties $(!!!)$.<|endoftext|> TITLE: Given $K(\alpha)/K$ and $K(\beta)/K$ disjoint extensions with at least one of them odd degree then $K(\alpha,\beta)=K(\alpha\beta)$ QUESTION [5 upvotes]: I have problems with this exercise. Let $K(\alpha)/K$ and $K(\beta)/K$ be disjoint extensions with at least one of them of odd degree.
Prove that $\alpha\beta$ is a primitive element for the extension $K(\alpha,\beta)/K$. Some of my ideas were: prove that $K(\alpha,\beta) \subset K(\alpha\beta)$ or that $K(\alpha) \subset K(\alpha\beta)$; use that in this situation $K(\alpha)=K(\alpha^2)$; try to relate the irreducible polynomials of the extensions involved. I didn't find anything useful. Can you help me? Thank you in advance. REPLY [4 votes]: This is false - a counterexample is given by $ \alpha = \sqrt[3]{2} $, $ \beta = \sqrt[3]{3} $, $ K = \mathbf Q $. The fields $ \mathbf Q(\sqrt[3]{2}) $ and $ \mathbf Q(\sqrt[3]{3}) $ intersect trivially (left as an exercise), are both of degree $ 3 $ over $ \mathbf Q $, but $ \alpha \beta = \sqrt[3]{6} $ is of degree $ 3 $ over $ \mathbf Q $, so is not a primitive element of the extension $ \mathbf Q(\alpha, \beta)/\mathbf Q $, which is of degree $ 9 $.<|endoftext|> TITLE: If $f(x)=0 \implies f'(x)>0$, is the zero set of $f$ a single point? QUESTION [10 upvotes]: Let $f:\mathbb R \to \mathbb R$ be a real-valued differentiable function. Suppose that $f'(x)>0$ for every $x$ such that $f(x)=0$. Does it follow that the number of zeroes of $f$ is at most one? This sounds quite reasonable to me: it seems intuitive that if $x_1, x_2$ are two different zeroes of $f$, then a third zero with negative derivative should lie between them. I can't seem to adapt this argument to a solid proof though. Is there any way to prove (or disprove!) this fact in a quick fashion? REPLY [4 votes]: Suppose that $f$ has at least two zeros. Assume first that $a<b$ are two "consecutive" zeros of $f$, that is, $f(a)=f(b)=0$ and $f$ has no zero in $(a,b)$. Since $f'(a)>0$ there is $x_1\in (a,b)$ such that $f(x_1)>0$. In a similar way, $f'(b)>0$ implies that there is $x_2\in (a,b)$ such that $f(x_2)<0$. Hence, by continuity, $f$ has a zero in $(a,b)$. Contradiction. If there are no "consecutive" zeros, then between two distinct zeros there are infinitely many zeros. Here we can find a strictly monotone sequence of zeros $x_n$ which converges to some point $x_0$. By continuity $0=\lim_{n\to\infty}f(x_n)=f(x_0)$. Finally we get a contradiction $$0=\frac{f(x_n)-f(x_0)}{x_n-x_0}\to f'(x_0)>0.$$ REPLY [4 votes]: Assume $a < b$ and $f(a) = f(b) = 0.$ By the existence of the derivative at $a$ and its positivity, there is a small neighborhood $(a, a + \delta)$ on which $f(x) > 0.$ Define the set $C$ as the set of all $a < x < b$ such that $f(x) > 0.$ $C$ is not empty. On the other hand, there is a small neighborhood $(b - \varepsilon, b)$ on which $f(x) < 0.$ Therefore $\sup C \neq b.$ Note supremum is the same as the least upper bound. Original Latex has, among "log-like functions," $\sup$ but does not have LUB for least upper bound. I could make one with operatorname, $\operatorname{lub}.$ Let $c = \sup C.$ We know $c < b.$ By continuity, $f(c) = 0.$ However, $f'(c) \leq 0$ since there are points $x$ with $f(x) > 0$ arbitrarily close to $c$ but $x < c.$ This contradicts the hypothesis $f'(c) > 0.$ To justify the claim about the neighborhood $(a, a+\delta)$: the limit that defines the derivative gives $$ A = \lim_{t \rightarrow 0} \frac{f(a+t) - f(a)}{(a+t - a)} = \lim_{t \rightarrow 0} \frac{f(a+t) }{t}. $$ Since $A > 0$ is the limit, there is some $\delta > 0$ such that $$ 0 < t < \delta \Longrightarrow \; \; \; \; \; \; \frac{f(a+t) }{t} > \frac{A}{2}, $$ or $$ 0 < t < \delta \Longrightarrow \; \; \; \; \; \; f(a+t) > \left( \frac{A}{2} \right) \; t > 0. $$ REPLY [2 votes]: Let $a<b$ be two "consecutive" zeros of $f$. Since $f'(a)>0$, there exists $c\in (a,b)$ such that $f(c)>f(a)=0$. Since $f'(b)>0$, there exists $d\in (c,b)$ such that $f(d)<f(b)=0$. By the intermediate value theorem, $f$ then has a zero in $(c,d)$, a contradiction.<|endoftext|> TITLE: Why is statistics considered a different discipline than mathematics rather than as a branch of mathematics?
QUESTION [5 upvotes]: I see that all my understanding of statistics (& so of probability, which is a branch of statistics) from high school came from the mathematics textbook, and it all appears entirely mathematical, so why then isn't it considered a branch of mathematics? Edit 1: The first two answers I've got are contradicting each other: one is claiming that it (statistics) falls under the domain of measure theory, which is a branch of mathematics, so it's entirely mathematical; the other one is saying that they are different. REPLY [3 votes]: You won't get any consensus in answers here because probability and statistics can be both theoretical and applied. Here, roughly speaking, is how I think about these things; I'm posting this to clarify the confusion mentioned in the comments and in the edited question, not to answer the historical question of why statistics departments are often separate from math departments. (Note, however, that MIT, for example, has no statistics department. All probability and statistics courses are in the math department, or done within the various scientific or engineering departments.) Theoretical statistics (also called mathematical statistics) -- e.g. what can be found in the book by Schervish -- is probability theory applied to particular theoretical problems: inference and estimation. You could construe this as a branch of probability theory, but in practice the theory of statistical inference usually follows more foundational courses in probability theory and the theory of stochastic processes. Theoretical statistics deals with sampling distributions, interval estimation, hypothesis testing, alternative modes of inference (Bayesian, non-parametric), etc. Statistics also has an applied component, of course. Applied statisticians work with actual data sets and use the theory developed by theoretical statisticians to draw conclusions about particular empirical problems. Theoretical statisticians hardly ever look at data; they prove mathematical theorems.<|endoftext|> TITLE: What does it exactly mean if a morphism of sheaves is surjective? QUESTION [8 upvotes]: Assume that the sheaves below are sheaves of abelian groups based on a topological space $X$, as Hartshorne did in his Algebraic Geometry. So on page 64, Hartshorne introduces two notions: the sheaf $\mathscr F^+$ associated to the presheaf $\mathscr F$, and a morphism of sheaves being surjective, so a morphism $\varphi:\mathscr F\longrightarrow \mathscr G$ is surjective if $\operatorname{im}\varphi = \mathscr G$. However, according to the definition on the same page, for any open set $U$ of $X$, $(\operatorname{im}\varphi)(U)$ is a set of functions $s$ from $U$ to $\bigcup_p (\operatorname{pre-im}\varphi)_p$ (the union of the stalks of the presheaf image), but the sheaf $\mathscr G$ is just an abstract sheaf. What exactly does it mean that the maps in $\operatorname{im}\varphi$ equal $\mathscr G$? REPLY [4 votes]: The answer to your last question is problem 4 (specifically part b) in section II.1 of Hartshorne. The essence of this exercise is this: When you are given a map of sheaves $f:\mathcal F\to \mathcal G$, the sheaf associated to the presheaf built from the images of the abelian groups can be CANONICALLY identified with a subsheaf of $\mathcal G$ (you can use the universal property of the associated sheaf to prove this. I.e., any map from a presheaf $\mathcal F$ to a SHEAF $\mathcal G$ will extend uniquely through $\mathcal F^+$ to make the relevant diagram commute.)
I should add just one comment to accompany Rene's excellent recommendation to think about this in terms of stalks: The stalks of a presheaf are exactly the same as the stalks of its associated sheaf. This scenario illustrates the well-known saying that the majority of the wealth of Algebraic Geometry is hidden in the exercises of Hartshorne.<|endoftext|> TITLE: Prove that a ring R having the property that every finitely generated R-module is free is either a field or the zero ring. QUESTION [5 upvotes]: https://wj32.org/wp/wp-content/uploads/2012/12/advanced-linear-algebra.pdf https://www.physicsforums.com/threads/about-r-module.313248/ http://isites.harvard.edu/fs/docs/icb.topic256346.files/Set%207.pdf http://121.192.180.130:901/media/5225/homework13.pdf http://121.192.180.130:901/media/5252/2015-01-07abstract%20algebra32.pdf These are proofs that I can find so far. But among all of them, I cannot understand one point: I think all of these proofs only show that if $R/I$ is a free module over $R$, then $R$ must be either a field or the zero ring. But I cannot see how it implies the fact that if every finitely generated $R$-module is free, then $R$ must be either a field or the zero ring. Could someone tell me how to show that? Thanks so much! REPLY [11 votes]: Let $I \subseteq R$ be an ideal. Then $R/I$ is an $R$-module. Moreover, it is generated by the element $\overline{1} \in R/I$. So $R/I$ is a finitely-generated $R$-module. Then by assumption $R/I$ is free. Assume for contradiction that $I$ is not equal to $0$ or $R$. First, since $R/I \neq 0$, $R/I$ is not spanned by the empty set. So it must have a basis $\{\overline{r_1}, \ldots \overline{r_k}\}$ which is nonempty. But since $I \neq 0$, we can choose some $i \in I\setminus \{0\}$, and then $i\cdot \overline{r_1} = 0$. Thus, the "basis" is actually linearly dependent. This is a contradiction, and thus we conclude that $I$ must have been $0$ or $R$, i.e. $0$ and $R$ are the only ideals of $R$. This implies that $R$ is either the zero ring or a field.<|endoftext|> TITLE: Abelian group of order 6 has exactly one element of order 2 QUESTION [5 upvotes]: I am trying to prove that an abelian group of order 6 has exactly one element of order 2. I know there is at least one by Cauchy's Theorem, so I am trying to show there is no more than one by contradiction. Suppose there are $a,b, a \neq b$ such that $a^2 = b^2 = e$. Then also $ab$ has order $2$, so we have two remaining elements (aside from $e, a, b, ab$) of which at least one, say $c$, has order $3$ by Cauchy. Then $ac$ has order $6$, but also $bc$ has order $6$, and there is only one element of order 6 (since $e$ has order 1, $a, b, ab$ have order 2, and $c$ has order 3), so $ac = bc$ and therefore $a = b$. This is a contradiction. Is this proof correct? Is there a "better" proof? One without Cauchy's Theorem? REPLY [5 votes]: Yes this proof is correct. Another approach might be to note that the elements of order a power of two must form a subgroup, and that the order of this subgroup must divide the order of the group. REPLY [5 votes]: Since you have Cauchy's theorem, a slightly shorter argument is to note that there are elements $a$ and $b$ of orders $2$ and $3$, respectively, and $ab$ has order $6$, so the group is cyclic of order $6$ with $ab$ as a generator. It's then immediate that $(ab)^3=a$ is the only element of order $2$. Added: You can in fact do this without Cauchy's theorem. Let $G$ be the group. If $G$ is cyclic, we're done.
If there is $a\in G$ of order $2$, then $G/\langle a\rangle$ has order $3$, so there is a $b\in G$ such that $b^3\in\langle a\rangle$. But then either $b^3=e$, and $ab$ has order $6$, or $b^3=a$, and $b$ has order $6$. A similar argument handles the case of an element in $G$ of order $3$.<|endoftext|> TITLE: Any homeomorphism from $[0,1)\to [0,1)$ has a fixed point. QUESTION [6 upvotes]: Show that any homeomorphism from $[0,1)\to [0,1)$ has a fixed point. My try: Suppose that $f(x)\neq x$ for all $x$; then either $f(x)>x$ for all $x$ or $f(x)<x$ for all $x$, since if we had $f(a)>a$ and $f(b)<b$ then by the intermediate value theorem $f$ would have a fixed point between $a$ and $b$. Say $f(x)>x$ for all $x$. Also, if $f^{-1}(x)>x$ then by the above we have $f(f^{-1}(x))>f^{-1}(x)>x\implies x>x$, which is false. Hence $f^{-1}(x)\le x$ for all $x$. But I can't complete the proof from here. Please give some hints so that I can take it forward. REPLY [2 votes]: Let $I_1$ and $I_2$ be intervals on the real line. Let $f$ be a homeomorphism between them. Use IVP (the Intermediate Value Property) to show that $f$ is either increasing or decreasing. Deduce that $f$ takes endpoints of $I_1$ to endpoints of $I_2$. Conclude that if $I_1 = I_2 = [0,1)$, then...<|endoftext|> TITLE: Join of $S^1$ with $S^1$ gives $S^3$. QUESTION [5 upvotes]: Problem. I want to prove that $S^1*S^1$ is homeomorphic to $S^3$, that is, the join of two copies of $S^1$ is homeomorphic to $S^3$. (Writing $I$ to denote the closed unit interval, the join of two spaces $X$ and $Y$ is defined as $(X\times Y\times I)/\sim$, where $\sim$ is an equivalence relation which identifies $(x, y_1, 0)$ with $(x, y_2, 0)$ and $(x_1, y, 1)$ with $(x_2, y, 1)$ for all $x_1, x_2\in X$ and $y_1, y_2\in Y$.) I tried the following. Think of $S^1$ as $I/\partial I$, and write $\pi:I\to S^1$ to denote the natural projection map. Then we have a map $f:(I\times I)\times I\to (S^1\times S^1)\times I$ which sends $(x, y, t)$ to $(\pi(x), \pi(y), t)$. Let $q:(S^1\times S^1)\times I\to S^1*S^1$ be the natural map coming from the equivalence relation $\sim$. Thus we have a surjective continuous map $q\circ f:(I\times I)\times I\to S^1*S^1$. Write $q\circ f$ as $g$. Since the domain of $g$ is compact, we know that if $\simeq_g$ is the equivalence relation on $I^3$ induced by $g$, then $I^3/\simeq_g$ is homeomorphic to $S^1*S^1$. I was sure that $\simeq_g$ would turn out to be such that it identifies all points of $\partial I^3$ and no point in the "interior" of $I^3$ with any other point, so that we would have $I^3/\simeq_g = I^3/\partial I^3$, which is homeomorphic to $S^3$. But to my surprise this is not the case! For example, consider the points $p:=(1/2, 1/2, 0)$ and $q:=(1/2, 1/2, 1)$ in $I^3$. Then $g(p)\neq g(q)$. What is weirder is that $\simeq_g$ does make $I^3/\simeq_g$ homeomorphic to $S^3$ nevertheless, despite the fact that the equivalence classes induced on $I^3$ by $\simeq_g$ are finer than the equivalence classes on $I^3$ induced by identifying all points in $\partial I^3$ to one point. Can anybody please provide a proof and, if possible, comment on the (apparently) weird phenomenon happening above (or point out a mistake somewhere). Thank you. REPLY [3 votes]: The map $g$ induces the following identifications $$ (x,y,0) \sim (x,y',0), \quad (x,0,z) \sim (x,1,z), \quad (x,y,1) \sim (x',y,1), \quad (0,y,z) \sim (1,y,z) $$ for all $x,x',y,y',z \in I$. The second and the fourth relation come from the usual identifications turning the unit square into a torus, while the first and the third relation stem from the construction of the join.
The first identification degenerates one side of the cube to a line, so we get a "prism", and the second identification folds the $(y=0)$-side to the $(y=1)$-side, which gives us a solid cylinder. The $(z=1)$-side of the cube has now become the hollow cylinder, and here each interval ranging from a point in one end of the cylinder to its opposite point in the other end is now collapsed to a point. This yields a ball $D^3$. As a last step, we identify every point in one half-sphere of the ball's boundary to the point in the other half-sphere which has the same $y$ and $z$ coordinates, and this produces $S^3$. If you have trouble visualizing the last step, go down one dimension and think of the disk $D^2$. When we identify points in $\partial D^2$ that lie above each other we are basically folding the disk around a sphere $S^2$, and the boundary becomes a line from one point in the sphere to its opposite point. The phenomenon you describe is nothing to worry about. It is entirely possible that a coarser relation produces the same space $Y$ as a finer relation, and as a consequence, this space $Y$ has a relation $R$ such that $Y/R \approx Y$. For example, if you collapse a longitude from the north pole to the south pole to a point you still have a sphere.<|endoftext|> TITLE: What are the properties of eigenvalues of permutation matrices? QUESTION [21 upvotes]: Up till now, the only things I was able to come up with/prove are the following properties: $\prod\lambda_i = \pm 1$ $ 0 \leq \sum \lambda_i \leq n$, where $n$ is the size of the matrix eigenvalues of the permutation matrix lie on the unit circle I am curious whether there exist some other interesting properties. REPLY [31 votes]: A permutation matrix is an orthogonal matrix (orthogonality of column vectors and norm of column vectors = 1). As such, because an orthogonal matrix "is" an isometry $$\tag{1}\|PV\|=\|V\|$$ If $V$ is an eigenvector associated with eigenvalue $\lambda$, substituting $PV=\lambda V$ in (1) we deduce $$|\lambda|=1.$$ Moreover, as $P^p=I_n$ ($p$ is the order of the permutation) these eigenvalues are such that $\lambda^p=1$; therefore $$\lambda=e^{i k 2\pi/p}$$ for some $k \in \mathbb{Z}$. Let us take an example: consider the following permutation decomposed into the product of two cycles with disjoint supports, a cycle $\color{red}{(5 4 3 2 1)}$ of order $5$ and a cycle $\color{blue}{(6 7 8)}$ of order $3$. Its associated matrix is: $$\left(\begin{array}{ccccc|ccc} 0 & \color{red}{1} & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & \color{red}{1} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & \color{red}{1} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & \color{red}{1} & 0 & 0 & 0\\ \color{red}{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 & \color{blue}{1}\\ 0 & 0 & 0 & 0 & 0 & \color{blue}{1} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & \color{blue}{1} & 0\end{array}\right)$$ Its cycle structure is reflected (see picture) into the five eigenvalues $\color{red}{e^{2i k\pi/5}}$ and the three eigenvalues $\color{blue}{e^{2i k\pi/3}}$.
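As a quick numerical cross-check (a minimal numpy sketch of my own; the 0-indexed encoding of the two cycles below is chosen for illustration), the spectrum of this matrix is exactly the union of the 5th and 3rd roots of unity:

```python
import numpy as np

# The 8x8 permutation matrix above: a 5-cycle on positions 1..5 and a
# 3-cycle on positions 6..8 (given as 0-indexed (row, column) pairs).
P = np.zeros((8, 8))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),
             (5, 7), (6, 5), (7, 6)]:
    P[i, j] = 1.0

eig = np.linalg.eigvals(P)
expected = np.concatenate([np.exp(2j * np.pi * np.arange(5) / 5),   # e^{2ik pi/5}
                           np.exp(2j * np.pi * np.arange(3) / 3)])  # e^{2ik pi/3}

# Compare the two multisets of eigenvalues up to numerical error.
key = lambda z: (round(z.real, 8), round(z.imag, 8))
assert np.allclose(sorted(eig, key=key), sorted(expected, key=key))
print("spectrum = 5th roots of unity together with 3rd roots of unity")
```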
Please note that eigenvalue $1$ is - in a natural way - a double eigenvalue, and more generally it has multiplicity $m$ if the permutation can be decomposed into $m$ disjoint cycles.<|endoftext|> TITLE: Find the determinant of $I + A$ QUESTION [11 upvotes]: What is the determinant of an $n \times n$ matrix $B=A+I$, where $I$ is the $n\times n$ identity matrix and $A$ is the $n\times n$ matrix $$ A=\begin{pmatrix} a_1 & a_2 & \dots & a_n \\ a_1 & a_2 & \dots & a_n \\ \vdots & \vdots & \ddots & \vdots \\ a_1 & a_2 & \dots & a_n \\ \end{pmatrix} $$ REPLY [13 votes]: Using the Weinstein-Aronszajn determinant identity, $$\det \left({\bf I}_n + {\bf 1}_n {\bf a}^{\top} \right) = \det \left( 1 + {\bf a}^{\top} {\bf 1}_n \right) = 1 + {\bf a}^{\top} {\bf 1}_n = 1 + \sum_{i=1}^n a_i$$ REPLY [10 votes]: First look at what happens when $n=2$, $n=3$; renaming our matrix $B_n(a_1,\cdots,a_n)$, one gets $$\begin{align}\det{B_2(a_1,a_2)}&=1+a_1+a_2\\\det{B_3(a_1,a_2,a_3)}&=1+a_1+a_2+a_3\end{align}$$ Assume that $\det{B_{n-1}(a_1,\cdots,a_{n-1})}=1+a_1+\cdots+a_{n-1}$ and consider $$\begin{vmatrix}1+a_1 & a_2 & \cdots & a_n\\ a_1 & 1+a_2 & \cdots & a_n\\ \vdots & \vdots & \ddots & \vdots\\ a_1 & a_2 & \cdots & 1+a_n \end{vmatrix}$$ Subtract the second row from the first to get $$\begin{vmatrix} 1 & -1 & 0 & \cdots & 0\\ a_1 & 1+a_2 & a_3 &\cdots & a_n\\ a_1 & a_2 & 1+a_3 & \cdots & a_n\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ a_1 & a_2 & a_3 &\cdots & 1+a_n \end{vmatrix}$$ Then add the first column to the second and get $$\begin{vmatrix} 1 & 0 & 0 & \cdots & 0\\ a_1 & 1+a_1+a_2 & a_3 &\cdots & a_n\\ a_1 & a_1+a_2 & 1+a_3 & \cdots & a_n\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ a_1 & a_1+a_2 & a_3 &\cdots & 1+a_n \end{vmatrix}$$ Developing along the first row one gets $$\det{B_n(a_1,\cdots,a_n)}=\det{B_{n-1}(a_1+a_2,\cdots ,a_n)}$$ And this by the induction assumption is $$1+a_1+a_2+\cdots+a_n$$<|endoftext|> TITLE: Application of Kronecker Weber Theorem QUESTION [7 upvotes]: I know this may be a very naive question. Please forgive my naivety. What are some of the applications of the Kronecker-Weber Theorem? REPLY [7 votes]: A theorem like that of Kronecker and Weber is not measured in terms of applications, it is measured in terms of insight and the potential to generate powerful generalizations. It has given rise to Kronecker's theory of complex multiplications and to one of Hilbert's 23 problems, and is a guiding theorem for classical class field theory. It has various proofs, each of which uses an important technique (Lagrange resolvents and Stickelberger, Hilbert's abelian crossings, decomposition and ramification groups, local fields and a lot more).<|endoftext|> TITLE: Find all positive solutions such that $n^2 + 9n +1$ is a perfect square QUESTION [6 upvotes]: My Attempt $n^2 + 9n +1$ is obviously not a perfect square when it is strictly between consecutive squares: $$n^2 + 8n +16 < n^2 + 9n +1 < n^2 + 10n + 25,$$ i.e. $(n+4)^2 < n^2+9n+1 < (n+5)^2$, which holds for every $n > 15$, so only the finitely many cases $n \le 15$ remain. REPLY: Suppose $n>0$ and $$n^2+9n+1=k^2,$$ for some $k\ge 1$. Then $(k+1)(k-1)=n(n+9)>n(n+2)$. Thus $k-1>n$. Let $k-1=n+v$, where $v$ is a positive integer. Then $$ (n+v+2)(n+v)=n(n+9), $$ and hence $$ n(7-2v)=v^2+2v. $$ Hence, if such $v$ exists, it can only be $v\in\{1,2,3\}$. The cases $v=1,2$ lead to a contradiction. Indeed, for $v=1$, we have $5n=3$, and for $v=2$, we have $3n=8$. Meanwhile, for $v=3$, we have $$ n=15, $$ for which $n^2+9n+1=361=19^2$. REPLY [2 votes]: We want to solve $$m^2=n^2+9n+1$$ over the integers.
Multiplying by $4$ gives $$4m^2=4n^2+36n+4=(2n+9)^2-77$$ So we have $$(2m-2n-9)\cdot (2m+2n+9)=-77$$ For every integer pair $(a,b)$ with $ab=-77$, solve the equation system $$2m-2n-9=a$$ $$2m+2n+9=b$$ (adding the two equations shows that the solution is always unique) and verify whether $m$ and $n$ are integers and $n$ is positive.<|endoftext|> TITLE: Tiling a cylindrical piece of paper QUESTION [11 upvotes]: Imagine a piece of paper. It has a square grid of 1x1 on it, so that every square has an area of $1\ \mathrm{cm}^2$. That piece of paper was folded into the shape of an (empty, hollow) cylinder whose length is 50 cm and whose base circumference is also 50 cm (look at the picture below). Can you cover the area of that cylinder with the shape on picture b, which is made up of 4 squares, each of dimensions 1x1? REPLY [14 votes]: It is impossible: the cylinder consists of $50^2$ unit squares, and we color them black and white like a chessboard. Each block of shape b) then admits one of two colorings: 3 black and 1 white, or 1 black and 3 white. If the number of blocks of the first type is $x$ and the number of the second type is $y$, then $$ 3x+y=50^2/2 $$ $$ x+3y=50^2/2 $$ so that $ x+y=625$. I will take JeanMarie's suggestion and I will use TonyK's argument: $x+y=625$, that is, the number of 4-square tiles is odd. Hence WLOG we can assume that $x$ is odd and $y$ is even. Hence the number of black squares in the cylinder is $$50^2/2=3x+y$$ The left hand side is even and the right hand side is odd. Hence we have a contradiction.<|endoftext|> TITLE: Is any type of geometry $not$ "infinitesimally Euclidean"? QUESTION [12 upvotes]: Question: Is there any (absolute) geometry which is not "infinitesimally Euclidean"? Context: All of the geometries listed on the Wikipedia page "Foundations of Geometry" (describing axiomatic formulations of geometry) seem to correspond to special cases of absolute geometry, and it seems like any absolute geometry is either hyperbolic, elliptic, or Euclidean (parabolic?) according to the version of the parallel postulate used, perhaps equivalently according to the type of curvature of the underlying geometric space. These all seem to have realizations or models as Riemannian manifolds of some sort. Any geometry of (smooth) manifolds seems to be infinitesimally Euclidean, even for those without a Riemannian metric, since each neighborhood is (diffeomorphic) homeomorphic to Euclidean space. Hyperbolic geometry seems to be the study of Riemannian manifolds with negative curvature, elliptic geometry the study of Riemannian manifolds with positive curvature, and Euclidean geometry the special case where there is no curvature. But obviously every neighborhood of a Riemannian manifold is diffeomorphic to Euclidean space, thus even the Riemannian geometry of spaces like the torus, which is neither strictly elliptic nor hyperbolic, is infinitesimally Euclidean. Thus it seems to me that all of the elementary geometric axioms determine every aspect of the geometric space (e.g. that it must be a Riemannian manifold) except the curvature -- thus changes in the parallel postulate seem to correspond to different values of the curvature of the space. Am I understanding this correctly? I had thought previously that the term geometry could be applied to spaces so abstract that they could not be embedded in any Euclidean space, and in particular were not infinitesimally Euclidean, but now I am not so sure. Any clarification would be appreciated.
The question stems in part from my reading of Agricola and Friedrich's "Elementary Geometry" (also of the original German version), so perhaps if you have read some of that book as well you might understand better the source of my misunderstanding. REPLY [7 votes]: I know this is an old question, but I just discovered it, and I'm a little confused by the responses from others, because there is an obvious problem with the following claim you make: Any geometry of (smooth) manifolds seems to be infinitesimally Euclidean, even for those without a Riemannian metric, since each neighborhood is (diffeomorphic) homeomorphic to Euclidean space. You are confusing the category of smooth manifolds with the category of Riemannian manifolds, or the subject of differential topology with the subject of differential geometry. More specifically, it makes no sense to talk about “geometry” in the category of mere smooth manifolds (even though we often learn about the apparatus of bare smooth manifolds in courses called “differential geometry”). The “geometry” is defined by the Riemannian metric and is something over and above the bare smooth structure. There is thus no sense at all to the statement that “the geometry of any smooth manifold is infinitesimally Euclidean”: there’s no geometry to speak of in the first place! Mere diffeomorphisms don’t capture, transport, or preserve geometry; isometries do. So yes, every smooth manifold is locally diffeomorphic to Euclidean space, and yes this has nothing to do with whether or not the manifold is endowed with a metric structure. There are thus no local invariants in differential topology, for the reason you state: all objects in the category of smooth manifolds are locally diffeomorphic. But there are local invariants in differential geometry; not all objects in that category are locally isometric.<|endoftext|> TITLE: A chess knight's moves QUESTION [6 upvotes]: A chess knight's horse has hurt its leg, and because of that it has to step on every field on its path in order to move in the shape of the letter L. Also try to imagine a chess board: for example when the knight tries to jump from the square a1 to the square b3, our injured knight has to step on fields a2 and a3 or on fields b1 and b2. Will that knight be able to step on all fields on a board of dimensions 5x11 in such a way that it will step on every field of that board? REPLY [8 votes]: Not satisfied with a single solution, I ran a python script to enumerate all possible solutions. It produced the following 9 tours (28 tours before discarding symmetric ones). 
# +-#-+-+ +-# +-#-+-+ | | | | | | | + + +-+-# + + + +-+-# | | | | | | | +-# # #-+ # +-# # #-+ | | | | | | | #-+-+ + + + #-+-+ + + | | | | | | | +-+-#-+ #-+ +-+-#-+ # # +-#-+-+ +-#-+-+ #-+ | | | | | | | + + +-+-# + +-+-# + + | | | | | | | +-# # #-+ # # +-#-+ # | | | | | | | #-+-+ + + + + + #-+-+ | | | | | | | +-+-#-+ #-+ +-# +-+-# # +-#-+-+ +-+-#-+ #-+ | | | | | | | + + +-+-# #-+-+ + + + | | | | | | | +-# # #-+ #-+ # #-+ # | | | | | | | #-+-+ + + + + + #-+-+ | | | | | | | +-+-#-+ #-+ #-+ +-+-# #-+-+ #-+ +-#-+-+ #-+ | | | | | | | +-+-# + + + +-+-# + + | | | | | | | # +-#-+ # # # +-#-+ # | | | | | | | + + #-+-+ + + + #-+-+ | | | | | | | +-# +-+-#-+ +-# +-+-# #-+-+ #-+ +-+-#-+ #-+ | | | | | | | +-+-# + + #-+-+ + + + | | | | | | | # +-#-+ # #-+ # #-+ # | | | | | | | + + #-+-+ + + + #-+-+ | | | | | | | +-# +-+-#-+ #-+ +-+-# #-+-+ +-# +-+-#-+ #-+ | | | | | | | +-+-# + + #-+-+ + + + | | | | | | | # +-# # +-#-+ # #-+ # | | | | | | | + + + +-+-# + + #-+-+ | | | | | | | +-# +-#-+-+ #-+ +-+-# #-+-+ +-#-+-+ +-#-+-+ | | | | | +-+-# + +-+-# + +-+-# | | | | | # +-# # #-+-+ # #-+-+ | | | | | | | + + + +-+-# # +-+-# # | | | | | | +-# +-#-+-+ +-+-#-+-+ #-+-+ +-#-+-+ +-+-#-+ | | | | | +-+-# + +-+-# # +-# + | | | | | | | # +-# # #-+-+ + + + # | | | | | | | | + + + +-+-# # +-# +-# | | | | | | +-# +-#-+-+ +-+-#-+-+ #-+-+ +-+-#-+ #-+ #-+ | | | | | | | +-+-# #-+-+ + + + + + | | | | | | | # +-# +-# # #-+ #-+ # | | | | | | | + + + + + +-+-# #-+-+ | | | | | | | +-# +-# +-#-+-+ +-+-# Here's the python script. Mostly brute-force, pruning out situations where the unvisited squares are not connected or have 2 vertices with only one adjacent unvisited square (since the knight must end his tour at such a square). Runs in under a minute. BOARD_WIDTH = 11 BOARD_HEIGHT = 5 on_board = lambda (x,y): 0 <= x < BOARD_WIDTH and 0 <= y < BOARD_HEIGHT adjacent = lambda (x,y): [p for p in [(x-1,y),(x+1,y),(x,y-1),(x,y+1)] if on_board(p)] VERTICES = set((x,y) for x in range(BOARD_WIDTH) for y in range(BOARD_HEIGHT)) def is_admissible(cur, visited): # count "alcoves", places the knight must end his tour # if there are more than one, a tour doesn't exist. alcove_count = 0 for v in VERTICES: if v in visited: continue # squares next to the knight don't count as alcoves if abs(cur[0]-v[0]) + abs(cur[1]-v[1]) == 1: continue deg = len(filter(lambda p:not p in visited, adjacent(v))) if deg == 1: alcove_count += 1 if alcove_count > 1: # print 'too many alcoves.' return False unvisited = VERTICES - visited for u in unvisited: break found = [u] unvisited.remove(u) while found: u = found.pop() for v in adjacent(u): if v in unvisited and not v in visited and not v in found: found.append(v) unvisited.remove(v) if unvisited: # print 'not connected.' 
return False return True def search(path): global longest_so_far if len(path) == len(VERTICES): yield path return cur = path[-1] visited = set(path) if not is_admissible(cur, visited): return for L in [ [(1,0),(2,0),(2,1)], [(1,0),(2,0),(2,-1)], [(1,0),(1,1),(1,2)], [(1,0),(1,-1),(1,-2)], [(-1,0),(-2,0),(-2,1)], [(-1,0),(-2,0),(-2,-1)], [(-1,0),(-1,1),(-1,2)], [(-1,0),(-1,-1),(-1,-2)], [(0,1),(0,2),(1,2)], [(0,1),(0,2),(-1,2)], [(0,1),(1,1),(2,1)], [(0,1),(-1,1),(-2,1)], [(0,-1),(0,-2),(1,-2)], [(0,-1),(0,-2),(-1,-2)], [(0,-1),(1,-1),(2,-1)], [(0,-1),(-1,-1),(-2,-1)], ]: new_path = [(l[0]+cur[0], l[1]+cur[1]) for l in L] if all(on_board(p) and not p in visited for p in new_path): for solution in search(path + new_path): yield solution def path_to_string(path): chars = [[' ']*(BOARD_WIDTH*2-1) for _ in range(BOARD_HEIGHT*2-1)] for i,(x,y) in enumerate(path): chars[2*y][2*x] = '#' if i % 3 == 0 else '+' for i in range(len(path)-1): (x0,y0),(x1,y1) = path[i:i+2] chars[y0+y1][x0+x1] = '|' if x0 == x1 else '-' return '\n'.join(''.join(row) for row in chars) def canonize_path_string(s): a = s b = s[::-1] c = '\n'.join(s.split('\n')[::-1]) d = c[::-1] return min(a,b,c,d) path_strings = set() for v in VERTICES: for path in search([v]): path_strings.add(canonize_path_string(path_to_string(path))) print '\n\n'.join(sorted(path_strings)) Edit: I also ran it on an 8x8 board. This took much longer and produced the following 52 solutions: http://pastebin.com/pThG6PsD<|endoftext|> TITLE: Which functions are continuous but nowhere Hölder continuous for $0<\alpha<1$? If $\alpha > 0,$ $a\in [0,1],$ $C>0,$ and $U$ is a neighborhood of $a$ relative to $[0,1],$ then $$|g(x)- g(a)| \le C|x-a|^\alpha$$ fails to hold for some $x\in U.$ This $g$ is thus "nowhere Hölder" in a very strong sense. (You never defined "nowhere Hölder", btw.) So you have the claim left to verify.<|endoftext|> TITLE: Show that a nonabelian group must have at least five distinct elements QUESTION [14 upvotes]: Show that a nonabelian group must have at least five distinct elements. I am just learning abstract algebra by self-study. I want help to solve this problem. Just give me a hint. REPLY [44 votes]: You need an instance of $ab\ne ba$. That requires $a\ne b$. Also $a\ne 1$ and $b\ne 1$ as $1$ commutes. Also, $a,b$ are not inverses of each other as those commute. Hence $1, a, b, ab, ba$ are pairwise distinct. REPLY [14 votes]: In fact it must have at least $6$ elements. You can discard the possibility of the group having exactly $1$ element immediately. You can discard the possibility of the group having a prime number of elements because any such group is cyclic, so $2,3$ and $5$ are discarded. It remains to show that no non-abelian group with $4$ elements exists. If it has an element of order $4$ then it is cyclic; otherwise every element must have order $2$ or $1$. And a group in which this happens is abelian, since $(ab)^2=e=a^2b^2$ REPLY [12 votes]: Alternative solution: Suppose the group is not abelian; then it has two elements $a$ and $b$ that do not commute, hence the group contains $e,a,b,ab,ba$ and must have at least $5$ elements. REPLY [10 votes]: Hint: Try to make a list of all groups of orders $1,2,3,4$ (up to isomorphism). There are not many (five, to be precise) and you will see they are all abelian. (One might add that there is also only one of order $5$ and it is also abelian.)<|endoftext|> TITLE: "How would Gauss proceed?" QUESTION [5 upvotes]: I am looking for universities with a graduate program in the United States.
I started with Princeton (dreaming is free:) and learned that in order to begin working on their theses the students have to spend a year studying there and pass a General Examination. I got curious about how difficult these exams would be. They have the tradition of posting records of past exams online, so, after nosing around for a while, I found this question by Professor Sarnak in a 2008 exam; he had just asked the student how he would prove that $\mathbb{Q}(\sqrt{-163})$ is a principal ideal domain when he asked him "How would Gauss proceed?". At first it sounded to me like an unfair and ridiculous question, but who am I to contradict Professor Sarnak. Do you know if there are actually enough reasons to tell how Gauss would proceed? And if that is the case, what would be his argument? (I know that "enough" is not very precise and that it can be thought of as a matter of opinion. I will content myself with plausible reasons motivating a specific argument Gauss could have used to answer the above question.) REPLY [2 votes]: Gauß proved that ${\mathbb Z}[i]$ is a PID using the fact that the class number of forms with discriminant $-4$ is $1$. On the other hand, Gauß only considered quadratic forms with even middle coefficient, so in the case of discriminant $-163$ he would have been forced to use the fact that the number of classes of forms with discriminant $-163$ is $3$, and the rest of the proof would then require additional arguments. I don't think, however, that this was the point of the question, which was aimed at getting binary quadratic forms as an answer.<|endoftext|> TITLE: What is the volume of the $3$-dimensional elliptope? QUESTION [19 upvotes]: My question Compute the following double integral analytically $$\int_{-1}^1 \int_{-1}^1 2 \sqrt{x^2 y^2 - x^2 - y^2 + 1} \,\, \mathrm{d} x \mathrm{d} y$$ Background The $3$-dimensional elliptope is the spectrahedron defined as follows $$\mathcal E_3 := \Bigg\{ (x_{12}, x_{13}, x_{23}) \in \mathbb R^3 : \begin{bmatrix} 1 & x_{12} & x_{13}\\ x_{12} & 1 & x_{23}\\ x_{13} & x_{23} & 1\end{bmatrix} \succeq 0 \Bigg\}$$ Using Sylvester's criterion for positive semidefiniteness (i.e., all $2^3-1 = 7$ principal minors are nonnegative), we obtain $1 \geq 0$ (three times), the three quadratic inequalities $$1 - x_{12}^2 \geq 0 \qquad \qquad \qquad 1 - x_{13}^2 \geq 0 \qquad \qquad \qquad 1 - x_{23}^2 \geq 0$$ and the cubic inequality $$\det \begin{bmatrix} 1 & x_{12} & x_{13}\\ x_{12} & 1 & x_{23}\\ x_{13} & x_{23} & 1\end{bmatrix} = 1 + 2 x_{12} x_{13} x_{23} - x_{12}^2 - x_{13}^2 - x_{23}^2 \geq 0$$ Thus, $\mathcal E_3$ is contained in the cube $[-1,1]^3$. Borrowing the pretty figure in Eisenberg-Nagy & Laurent & Varvitsiotis, here is an illustration of $\mathcal E_3$ What is the volume of $\mathcal E_3$? Motivation Why is $\mathcal E_3$ interesting? Why bother? Because $\mathcal E_3$ gives us the set of $3 \times 3$ correlation matrices. My work For convenience, $$x := x_{12} \qquad\qquad\qquad y := x_{13} \qquad\qquad\qquad z := x_{23}$$ I started with sheer brute force. Using Haskell, I discretized the cube $[-1,1]^3$ and counted the number of points inside the elliptope. I got an estimate of the volume of $\approx 4.92$.
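For readers who would rather not run Haskell, here is a rough Python equivalent of the same brute-force idea (my own sketch; it uses Monte Carlo sampling instead of a regular grid, which changes nothing conceptually):

```python
import numpy as np

# Estimate vol(E_3): sample the cube [-1,1]^3 uniformly and keep the
# points where det = 1 + 2xyz - x^2 - y^2 - z^2 >= 0 (inside the cube,
# the quadratic minor constraints hold automatically).
rng = np.random.default_rng(0)
x, y, z = rng.uniform(-1.0, 1.0, size=(3, 10**7))
inside = 1 + 2*x*y*z - x**2 - y**2 - z**2 >= 0

print(8 * inside.mean())   # ~ 4.93, consistent with the grid estimate
```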
I then focused on the cubic surface of the elliptope $$\det \begin{bmatrix} 1 & x & y\\ x & 1 & z\\ y & z & 1\end{bmatrix} = 1 + 2 x y z - x^2 - y^2 - z^2 = 0$$ which I rewrote as follows $$z^2 - (2 x y) z + (x^2 + y^2 - 1) = 0$$ Using the quadratic formula, I obtained $$z = x y \pm \sqrt{x^2 y^2 - x^2 - y^2 + 1}$$ Integrating using Wolfram Alpha, $$\int_{-1}^1 \int_{-1}^1 2 \sqrt{x^2 y^2 - x^2 - y^2 + 1} \,\, \mathrm{d} x \mathrm{d} y = \cdots \color{gray}{\text{(magic happens)}} \cdots = \color{blue}{\frac{\pi^2}{2} \approx 4.9348}$$ I still would like to compute the double integral analytically. I converted to cylindrical coordinates, but did not get anywhere. Other people's work This is the same value Johnson & Nævdal obtained in the 1990s: Thus, the volume is $$\left(\frac{\pi}{4}\right)^2 2^3 = \frac{\pi^2}{2}$$ However, I do not understand their work. I do not know what Schur parameters are. Haskell code Here's the script:

-- discretization step
delta = 2**(-9)
-- discretize the cube [-1,1] x [-1,1] x [-1,1]
grid1D = [-1,-1+delta..1]
grid3D = [ (x,y,z) | x <- grid1D, y <- grid1D, z <- grid1D ]
-- find points inside the 3D elliptope
points = filter (\(x,y,z)->1+2*x*y*z-x**2-y**2-z**2>=0) grid3D
-- find percentage of points inside the elliptope
p = (fromIntegral (length points)) / (1 + (2 / delta))**3

After loading the script:

*Main> delta
1.953125e-3
*Main> p
0.6149861105903861
*Main> p*(2**3)
4.919888884723089

Hence, approximately $61\%$ of the grid's points are inside the elliptope, which gives us a volume of approximately $4.92$. A new Buffon's needle A symmetric $3 \times 3$ matrix with $1$'s on the main diagonal, and whose entries off the main diagonal are realizations of a random variable whose PDF is uniform over $[-1,1]$, is positive semidefinite (and, thus, a correlation matrix) with probability $\left(\frac{\pi}{4}\right)^2$. Estimating the probability, we estimate $\pi$. Using the estimate given by the Haskell script:

*Main> 4 * sqrt 0.6149861105903861
3.1368420058151125

References Cynthia Vinzant, What is a... Spectrahedron?, Notices of the AMS, Volume 61, Number 5, May 2014. Grigoriy Blekherman, Pablo A. Parrilo, Rekha R. Thomas, Semidefinite Optimization and Convex Algebraic Geometry, SIAM, March 2013. Marianna Eisenberg-Nagy, Monique Laurent, Antonios Varvitsiotis, Complexity of the positive semidefinite matrix completion problem with a rank constraint, arXiv:1203.6602. C. R. Johnson, G. Nævdal, The probability that a (partial) matrix is positive semidefinite, in Recent Progress in Operator Theory, International Workshop on Operator Theory and Applications, IWOTA 95, Regensburg, July 31–August 4, 1995. REPLY [12 votes]: The integrand factors as $\sqrt{(1-x^2)(1-y^2)}=\sqrt{(1-x^2)}\sqrt{(1-y^2)}$ and each factor can be integrated separately. You recognize the integral for the area of a half circle of radius $1$, hence $$I=2\left(\frac\pi2\right)^2.$$
We define $\alpha_n$ for $n \ge 2$ as the ratio of the sum of the $n$-th power containing numbers to the sum of the $n$-th power free numbers, i.e. $$ \alpha_n = \lim_{r \to \infty}\frac{q_{n,1}+q_{n,2}+\ldots + q_{n,r}}{p_{n,1}+p_{n,2}+\ldots + p_{n,r}}. $$ Question: Is it true that $$ \sum_{n = 2}^{\infty} \frac{\alpha_n}{n} = 1 - \gamma $$ where $\gamma$ is the Euler-Mascheroni constant? Motivation: I ran a program and the sum seems to converge to 0.422785, which is close to $1-\gamma$. REPLY [5 votes]: Update. The equality is true. Since $$Q_{n}\left(x\right)=\frac{x}{\zeta\left(n\right)}+O_{n}\left(x^{1/n}\right) $$ we have, using Abel's summation, that $$ \sum_{p_{n}\leq N}p_{n}=Q_{n}\left(N\right)N-\int_{1}^{N}Q_{n}\left(t\right)dt $$ $$=\frac{N^{2}}{2\zeta\left(n\right)}+O_{n}\left(N^{1+1/n}\right). $$ In the same spirit we get $$ \sum_{q_{n}\leq N}q_{n}=\left(N-Q_{n}\left(N\right)\right)N-\int_{1}^{N}\left(t-Q_{n}\left(t\right)\right)dt $$ $$=\frac{N^{2}}{2}\left(1-\frac{1}{\zeta\left(n\right)}\right)+O_{n}\left(N^{1+1/n}\right) $$ so we have $$\alpha_{n}=\frac{q_{n,1}+q_{n,2}+\dots}{p_{n,1}+p_{n,2}+\dots}=\lim_{N\rightarrow\infty}\frac{\sum_{q_{n}\leq N}q_{n}}{\sum_{p_{n}\leq N}p_{n}}=\zeta\left(n\right)-1 $$ hence $$\sum_{n\geq2}\frac{\alpha_{n}}{n}=\sum_{n\geq2}\frac{\zeta\left(n\right)-1}{n}=\color{red}{1-\gamma}$$ as wanted (for a proof of the last identity see here).<|endoftext|> TITLE: Understanding the difference between a Flat, a locally flat and an Euclidean space QUESTION [8 upvotes]: Suppose a Riemannian manifold is such that the metric tensor in a coordinate system is given by $$g_{ij}=\delta_{ij}$$ so that $$ds^2=g_{ij}dx^i dx^j=(dx^1)^2+(dx^2)^2+(dx^3)^2$$ (i) Is this space called (a) flat or locally flat, (b) Euclidean or locally Euclidean? (ii) Can we think of a simple space which is locally flat but not locally Euclidean or vice-versa? (iii) What is the difference between a flat space and an Euclidean space? (iv) What is a space called, if it has a metric $g_{ij}=C_i\delta_{ij}$ (where $C_i$ are some constants independent of the coordinates)? A not-too-technical answer will be helpful because my understanding of differential geometry and manifolds is limited. REPLY [15 votes]: I think this question deserves an answer here in physics SE, since these notions are very common in Relativity. For a manifold $M$ equipped with a metric (either Euclidean or Lorentzian) $g$, locally flat means that every point $p\in M$ admits an open set $U \ni p$ endowed with coordinates $$\psi : U \ni q \mapsto (x^1(q),\ldots, x^n(q)) \in \psi(U) \subset \mathbb R^n$$ such that the metric in these coordinates takes the constant canonical form $$g_{ab}(q) = \delta_{ab}\quad \mbox{Riemannian case}\:,$$ or $$g_{ab}(q) = \eta_{ab}\quad \mbox{Lorentzian case}\:.$$ The point is that there is no guarantee that there is such a $U$ which covers the whole manifold. For this reason we use the adverb locally in front of flat. An example is a two dimensional cylinder in $\mathbb R^3$ equipped with the metric induced by the standard metric on $\mathbb R^3$.
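To see the cylinder example concretely (a short computation added for illustration, with coordinates chosen for convenience): parametrize the cylinder of radius $R$ by the angle $\varphi$ and the height $z$. The metric induced from $\mathbb R^3$ is $$ds^2 = R^2\,d\varphi^2 + dz^2,$$ which takes the canonical form $\delta_{ab}$ in the rescaled coordinate $u = R\varphi$; but since $\varphi$ is only defined modulo $2\pi$, no single such chart covers the whole cylinder, so the cylinder is locally flat without being globally flat.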
For a manifold $M$ equipped with a metric (either Euclidean or Lorentzian) $g$, globally flat means that there is a global coordinate system $$\psi : M \ni q \mapsto (x^1(q),\ldots, x^n(q)) \in \psi(M) \subset \mathbb R^n$$ such that the metric in these coordinates takes the constant canonical form $$g_{ab}(q) = \delta_{ab}\quad \mbox{Euclidean case}\:,$$ or $$g_{ab}(q) = \eta_{ab}\quad \mbox{Lorentzian case}\:.$$ In this case $(M,g)$ is isometrically identified with an open portion (possibly the whole space) of $\mathbb R^n$ equipped with the standard (Euclidean or Lorentzian) metric. In view of the definitions above, the answer to your question (i) depends on the extension of your coordinate system. If it covers the whole manifold, we have a globally flat manifold. If that is not the case, we can only say that a region of the manifold, the one covered by the coordinates, is flat. Flat space and Euclidean space have more or less the same meaning. They usually indicate Minkowski spacetime (i.e. a spacetime isometrically isomorphic to $\mathbb R^n$ equipped with the standard Minkowski metric $\eta_{ab}$) or $\mathbb R^n$ equipped with the standard Euclidean metric $\delta_{ab}$. However, these are somewhat vague terms whose meaning (global or local) should be understood from the context. Regarding your last question: if all $C_i$ are positive, the metric can be recast into the standard form $\delta_{ij}$ simply by rescaling (changing) the coordinates by a factor $1/\sqrt{C_i}$. If some constant is negative, with a similar procedure the metric can be reduced to a constant form $h_i \delta_{ij}$ where $h_i = \pm 1$, which is Lorentzian only if all $h_i$ are $+1$ except one (or all $h_i$ are $-1$ except one, depending on the adopted convention). It is impossible for a $C_i$ to vanish, because that would imply $\det [g_{ij}]=0$, which is not admitted by the definition of a (non-degenerate) metric. ADDENDUM. An apparently related question concerns the so-called locally flat coordinates around a given point. This is an independent issue. These coordinates are also called Riemannian normal coordinates or Gaussian normal coordinates. They are coordinates $x^1,\ldots, x^n$ defined in a neighborhood of a point $p\in M$ such that exactly at $p$ the components of the metric have their canonical form (e.g. $g_{ab}(p)=\delta_{ab}$ or $g_{ab}(p)=\eta_{ab}$ depending on the nature of the metric) and $\frac{\partial g_{ab}}{\partial x^c}|_p =0$. It is possible to prove that every manifold equipped with a smooth metric admits such a coordinate system for every given point $p\in M$. It is also possible to extend the definition referring to a geodesic $\gamma\subset M$ instead of a point $p$: exactly on $\gamma$ the components of the metric have their canonical form (e.g. $g_{ab}(\gamma)=\delta_{ab}$ or $g_{ab}(\gamma)=\eta_{ab}$ depending on the nature of the metric) and $\frac{\partial g_{ab}}{\partial x^c}|_\gamma =0$. This form gives rise to a very precise mathematical description of Einstein's equivalence principle in GR when $\gamma$ is a timelike geodesic.<|endoftext|> TITLE: Tricky inequality no avail to AM-GM QUESTION [6 upvotes]: Let $a,b,c$ be 3 distinct positive real numbers such that $abc = 1$. Prove that $$\frac{a^3}{\left(a-b\right)\left(a-c\right)}\ +\frac{b^3}{\left(b-c\right)\left(b-a\right)}\ +\ \frac{c^3}{\left(c-a\right)\left(c-b\right)}\ \geq 3$$ I tried AM-GM in many different ways, but it doesn't work since one of the terms on the LHS inevitably becomes negative. Any help is greatly appreciated.
REPLY [3 votes]: By AM-GM $$\sum\limits_{cyc}\frac{a^3}{(a-b)(a-c)}=-\sum\limits_{cyc}\frac{a^3}{(a-b)(c-a)}=-\sum\limits_{cyc}\frac{a^3(b-c)}{\prod\limits_{cyc}(a-b)}=a+b+c\geq3$$<|endoftext|> TITLE: A function that returns the highest prime less than or equal to $n$ QUESTION [8 upvotes]: Overview While playing with numbers in my amateur math studies, I found a remarkable and beautiful function, big delta, that returns the highest prime number equal to or less than a given natural number. Iterating this function for $n = 1$ to $x$ outputs the list of all prime numbers between $1$ and $x$. The big and small delta functions Given a natural number $x$ with representation $d_t...d_1d_0=\sum_{i=0}^t d_i\cdot b^i$ in base $b$ we define: $$ \delta(x,b):=\sum_{i=0}^t d_i $$ so that $\delta(x,b)$ is the digit sum of $x$'s base $b$ representation. Based on this we define: $$ f(x):=\sum_{b=2}^{x+1}\delta(x,b) $$ and finally $$ \Delta(n):=\max_{2\leq x\leq n}\left[f(x)-f(x-1)\right] $$ Then it appears to be the case that $\Delta(n)$ is the largest prime less than or equal to $n$. The big and small delta functions described A visual rendering of the big delta function Experimental proof In order to check the above experimentally, I wrote an implementation of the small and big delta functions above in Python. It is available on the following GitHub repository: Big Delta GitHub repository Sample output Unfortunately, I can't input large quantities of text here. So here is a sample output of the above script for a very small set of big and small delta functions for numbers up to 15. $$ \begin{array}{|c|ccc|ccccccccccccccc|} \hline n&\Delta&df&f&\delta_2&\delta_3&\delta_4&\delta_5&\delta_6 &\delta_7&\delta_8&\delta_9&\delta_{10} &\delta_{11}&\delta_{12}&\delta_{13}&\delta_{14} &\delta_{15}&\delta_{16}\\ \hline 1 & - & - & 1 & 1 \\ 2 & 2 & 2 & 3 & 1 & 2 \\ 3 & 3 & 3 & 6 & 2 & 1 & 3 \\ 4 & 3 & 2 & 8 & 1 & 2 & 1 & 4 \\ 5 & 5 & 5 & 13 & 2 & 3 & 2 & 1 & 5 \\ 6 & 5 & 3 & 16 & 2 & 2 & 3 & 2 & 1 & 6 \\ 7 & 7 & 7 & 23 & 3 & 3 & 4 & 3 & 2 & 1 & 7 \\ 8 & 7 & 2 & 25 & 1 & 4 & 2 & 4 & 3 & 2 & 1 & 8 \\ 9 & 7 & 5 & 30 & 2 & 1 & 3 & 5 & 4 & 3 & 2 & 1 & 9 \\ 10 & 7 & 5 & 35 & 2 & 2 & 4 & 2 & 5 & 4 & 3 & 2 & 1 & 10 \\ 11 & 11 & 11 & 46 & 3 & 3 & 5 & 3 & 6 & 5 & 4 & 3 & 2 & 1 & 11 \\ 12 & 11 & 0 & 46 & 2 & 2 & 3 & 4 & 2 & 6 & 5 & 4 & 3 & 2 & 1 & 12 \\ 13 & 13 & 13 & 59 & 3 & 3 & 4 & 5 & 3 & 7 & 6 & 5 & 4 & 3 & 2 & 1 & 13 \\ 14 & 13 & 7 & 66 & 3 & 4 & 5 & 6 & 4 & 2 & 7 & 6 & 5 & 4 & 3 & 2 & 1 & 14 \\ 15 & 13 & 9 & 75 & 4 & 3 & 6 & 3 & 5 & 3 & 8 & 7 & 6 & 5 & 4 & 3 & 2 & 1 & 15 \\ \hline \end{array} $$ where in this table $df$ is shorthand for $f(n)-f(n-1)$ and $\delta_b$ is shorthand for $\delta(n,b)$. Experimental results I uploaded my experimental results here: Big Delta Github repository It contains various outputs for [n]={1...31}, [n]={1...1024}, up to ~132'000. Question 1 The above is no more than a conjecture because I did not provide a formal proof. In consequence, my main question is: how can we demonstrate this? Question 2 Is this relationship found between the sums of digits in multiple bases and the set of prime numbers a well-known relationship, or is it an original discovery? REPLY [10 votes]: The Growth of $\delta_b$ Let us write $\delta_b(n)$ instead of $\delta(n,b)$ to mean the digit sum of the base $b$ representation of $n$.
Then it turns out that we have $$ \delta_b(n)=n-\beta_b(n)\cdot(b-1)\tag1 $$ where $\beta_b(n)$ counts the number of times $b$ divides $1,2,...,n$ (counted by multiplicity). To see this, let us use the notation $n=(d_t,...,d_1,d_0)$ to mean $n=\sum_{i=0}^t d_i\cdot b^i$, and then consider $n$ of the form: $$ n=(d_t,...,d_k,0,...,0), \quad\text{where }d_k\neq 0 $$ for such $n$ we have $$ n-1=(d_t,...,d_k-1,b-1,...,b-1) $$ Comparing those we have: $$ \delta_b(n)=\delta_b(n-1)+1-k\cdot(b-1) $$ Therefore $\delta_b(n)$ is $1$ greater than $\delta_b(n-1)$ minus $b-1$ times the multiplicity $k$ with which $b$ divides $n$. If $b$ does NOT divide $n$, we have $$ \delta_b(n)=\delta_b(n-1)+1\tag2 $$ Hence, as $n$ increases by $1$ repeatedly, $\delta_b(n)$ is increased by $1$ each time (starting from $\delta_b(1)=1$), but whenever $n$ is divisible by $b$ with multiplicity $k$, then $k$ times $b-1$ is subtracted. The claim expressed in $(1)$ follows. Note also that $$ \delta_b(b)=1\tag3 $$ These results lead to the following proof: Proof of Correctness of Algorithm Let $p$ be a prime number. Then $$ f(p)-f(p-1)=p\tag4 $$ This can be seen by first considering $\delta_b(p)$ vs. $\delta_b(p-1)$ for $b\leq p-1$. Since such $b$ does not divide $p$, we have from $(2)$ that $$ \delta_b(p)=\delta_b(p-1)+1 $$ and therefore $$ \sum_{b=2}^{p-1}\delta_b(p)=p-2+\sum_{b=2}^{p-1}\delta_b(p-1) $$ Now, since $\delta_p(p)=1$ and $\delta_p(p-1)=p-1$, which again differ by $p-2$ (this time in the opposite direction), it follows that $$ \begin{aligned} f(p)-p&=f(p)-\delta_{p+1}(p)\\ &=\sum_{b=2}^p\delta_b(p)\\ &=\sum_{b=2}^p\delta_b(p-1)\\ &=f(p-1) \end{aligned} $$ and the claim in $(4)$ follows. If on the other hand $n$ is composite, having some divisor $2\leq q\leq n-1$, the extra multiples of $b-1$ subtracted in $(1)$ force the difference $f(n)-f(n-1)$ to stay strictly below $n$.<|endoftext|> TITLE: What does $\mathbf G_m$ really mean? QUESTION [9 upvotes]: My understanding is that $\mathbf G_m$ stands for $k^*$ (multiplicative group of the field $k$) as a group scheme. But I have also seen symbols like $H^1(X_{et},\mathbf G_m)$. Is this talking about cohomology groups of $X_{et}$ with values in the constant sheaf $\mathbf G_m$? REPLY [12 votes]: The first instance is the group scheme $\mathbb{G}_m := \mathrm{Spec}(k[x^{\pm 1}])$. Associated with this (abelian) group scheme there is its functor of points which induces a sheaf (for the étale topology in the OP's notation) of abelian groups. Note that if $X$ is a scheme over $k$, then the morphisms $X\to \mathrm{Spec}(k[x^{\pm 1}])$ are in canonical bijection with the $k$-algebra homomorphisms $k[x^{\pm 1}]\to\mathcal{O}_X(X)$; thus, with the set of units $\mathcal{O}_X(X)^\times$. Consequently, if we were talking about this sheaf on a fixed scheme $X$ with the Zariski topology, then the notation $\mathcal{O}^{\times}_{X}$ would be more common than $\mathbb{G}_m$. We use $\mathbb{G}_{m}$ to emphasise that it's the sheaf of groups for some site which may not be the usual Zariski topology.<|endoftext|> TITLE: What is the right way to approximate $e^{-1/x^2}$ by polynomials? QUESTION [8 upvotes]: It's well known that $f(x) = e^{-1/x^2}$ (extended by $f(0)=0$) is a smooth function that's not analytic at $x=0$, because every derivative at zero is zero, and so all of its Taylor polynomials are zero. For the sake of simplicity fix the center at zero for the rest of the question. This function isn't all that pathological and it seems like there should still be a principled way to approximate it by polynomials by using some other natural data about $f$.
More concretely, what is a method for approximating a smooth function $f(x)$ by polynomials that has the following properties: Optimal by some natural criterion (analogous to how the degree-$k$ Taylor polynomial is optimal among degree $k$ polynomials on a sufficiently small interval) Graded, i.e. there is a parameter $n$ so that larger values of $n$ use higher-degree polynomials and improve the approximation. Efficiently computable, i.e., there is a $\text{poly}(n)$-time algorithm which constructs the polynomial representation from the input parameter $n$ Nontriviality for $e^{-1/x^2}$ REPLY [2 votes]: Notice that as $x$ approaches $0$ along the imaginary axis, this function approaches $\infty$, so its bad behavior is close to home even if not on the real axis. For every polynomial function $p(x)$ that is not identically $0$, there exists $\varepsilon>0$ such that for every $x$ within the neighborhood $(-\varepsilon,+\varepsilon)$, with the possible exception of $x=0$, the value of $p(x)$ is farther from $f(x)$ than $0$ is from $f(x)$. Thus for any polynomial approximation to be better than the identically $0$ function, you'd have to have some other criterion of what is "better".<|endoftext|> TITLE: Area in axiomatic geometry QUESTION [5 upvotes]: Let's say we have axiomatic geometry as defined by Hilbert's axioms. For line segments, angles, triangles, squares, etc. we have the notion of congruency to determine whether two of them are "the same". But this doesn't seem sufficient to determine whether two figures of different shape have the same area. For example, I don't see how the Pythagorean theorem can be proved using only the notion of congruency. So basically my question is: how is area defined in axiomatic geometry? REPLY [5 votes]: From Marvin Jay Greenberg's excellent "Euclidean and Non-Euclidean Geometries" ... What does "area" mean [...]? We can certainly say intuitively that it is a way of assigning to every triangle a certain positive number called its area, and we want this area function to have the following properties: Invariance under congruence. Congruent triangles have the same area. Additivity. If a triangle $T$ is split into two triangles $T_1$ and $T_2$ by a segment joining a vertex to a point on the opposite side, then the area of $T$ is the sum of the areas of $T_1$ and $T_2$. Having defined area, we then ask how it is calculated. [...] Basically, any strategy for assigning values that satisfies (1) and (2) above can be reasonably interpreted as "area" in a geometry. The calculations are what make things interesting. In Euclidean geometry, one derives that the "one-half base-times-height" formula satisfies the necessary conditions. (Note: So does "one-half base-times-height-times-an-arbitrary-positive-constant".) In spherical geometry, one can show that angular excess ---that is, "angle sum, minus $\pi$"--- works as a triangle's area function (up to an arbitrary constant multiplier). (We can do a sanity check with a simple example: A triangle with a vertex at a sphere's North Pole, and with opposite side falling on 1/4 of the Equator, covers one-eighth of the surface of the sphere; therefore, it has area $\frac{1}{8}\cdot 4\pi r^2 = \frac{\pi}{2}r^2$. On the other hand, such a triangle's angular excess is $\left(\frac{\pi}{2}+\frac{\pi}{2}+\frac{\pi}{2}\right) - \pi = \frac{\pi}{2}$, which is, in fact, proportional to the calculated area.
(If we work on the unit sphere, we get to ignore the constant of proportionality.)) In hyperbolic geometry, angular defect ---"$\pi$, minus angle sum"--- is the go-to function. (This is harder to check than in the spherical case, so I'll note a fascinating consequence: a triangle with three infinitely-long sides happens to have three angles of measure $0$; therefore, such a triangle's area is finite ... specifically: $\pi$! (Constant of proportionality ignored, to maximize the impact of that statement.)) So, because area calculations are so very different in these contexts, you can't expect a single formula to fall out of the basic axioms. At some point, you observe a phenomenon that satisfies (1) and (2), you declare "this is area", and you go on from there. Importantly, there need not be any direct connection between a geometry's notion of area and that geometry's incarnation of the Pythagorean Theorem. The fact that squares erected on the legs of a Euclidean right triangle have total area equal to that of the square erected upon the hypotenuse is a neat "coincidence". This doesn't happen in non-Euclidean (spherical or hyperbolic) geometry: these spaces don't even allow squares! (See Wikipedia for a discussion of the non-Euclidean counterparts of the Pythagorean Theorem.) I've been a bit informal here, but hopefully I've shown that your question "How is area defined in axiomatic geometry?" is actually quite deep.<|endoftext|> TITLE: Why ZFC+FOL cannot uniquely describe/characterize R or N? QUESTION [13 upvotes]: I find the following text on the Wikipedia page on first order logic: First-order logic is the standard for the formalization of mathematics into axioms and is studied in the foundations of mathematics. Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that do fully describe these two structures (that is, categorical axiom systems) can be obtained in stronger logics such as second-order logic. Here, what I want to ask is: what does uniquely describe/characterize mean? Why is it that $\textbf{FOL}$ cannot uniquely describe/characterize $\mathbb{R}$ or $\mathbb{N}$? REPLY [2 votes]: Still, to some extent $\mathbb N$ is characterisable in FOL plus a small increment. The increment is the non-FOL requirement of being a least structure satisfying some list of axioms (Peano's axioms in the FOL version for $\mathbb N$). What about $\mathbb R$?<|endoftext|> TITLE: Is it possible to rewrite $\sin(x) / \sin(y)$ in the form of $\sin(z)$? QUESTION [12 upvotes]: I'm looking to get a particular answer in the form of $\sin(z)$, and I managed to reach an answer in the form $\sin(x)/\sin(y)$. I've checked on a calculator, which has confirmed that they're the same number, but how can I convert the fraction into a single sine in order to show that without relying on the calculator? REPLY [17 votes]: You will always have $-1\leq\sin z\leq 1$, while $\sin x/\sin y $ can be any real number, or even be undefined when $\sin y=0$. For example, $$\frac {\sin\pi/2}{\sin\pi/4}=\frac1 {1/\sqrt2}=\sqrt2>1$$ and no choice of $z $ will give you $\sin z=\sqrt2$.
In the case in which the quotient is between $-1$ and $1$, you can write $$z=\arcsin\left (\frac {\sin x}{\sin y}\right), $$ but I don't think there is a simpler expression.<|endoftext|> TITLE: Primes of the form $2^p-p$ or $2^p+p$ with $p$ prime QUESTION [5 upvotes]: I have to find some prime numbers of the form $2^p-p$ or $2^p+p$ with $p$ a prime number. The question is: how many of these prime numbers are there? I have no clue how this can be done. Thanks and please excuse my English. REPLY [2 votes]: Let $p_1 = 2^p + p$ and $p_2 = 2^p - p$. Trivially, $p = 3$ is a solution. Let $p > 3$. Then either $p$ is of the form $6k+1$ or it is of the form $6k+5$. Case 1: $p$ is of the form $6k+1$. In this case, $p_1 \equiv 2^{6k+1} + 6k + 1 \equiv 64^k\cdot 2 + 1 \equiv 4^k\cdot 2 + 1 \equiv 4\cdot 2 + 1 \equiv 3 \pmod 6$, hence $p_1$ cannot be a prime. Case 2: $p$ is of the form $6k+5$. In this case, $p_2 \equiv 2^{6k+5} - 6k - 5 \equiv 3 \pmod 6$, hence $p_2$ cannot be a prime. Thus, for every prime $p > 3$, one of $p_1$ and $p_2$ is divisible by $3$, so $p = 3$ is the only prime for which $2^p+p$ and $2^p-p$ are both prime.<|endoftext|> TITLE: Product of three primes that is a square modulo 389 QUESTION [5 upvotes]: Find $n$ such that $n$ is a product of three prime numbers and $n$ is a square modulo $389$. I'm not sure how to begin with this problem. Do I have to use an algorithm involving quadratic reciprocity? How can I apply it? REPLY [6 votes]: By quadratic reciprocity, 2 and 3 are not squares modulo 389, but 5 is, so you can take $n=30$.<|endoftext|> TITLE: $\lim_{x\rightarrow 0}\frac{1-\cos a_{1}x \cdot \cos a_{2}x\cdot \cos a_{3}x \cdots \cos a_{n}x}{x^2}$ QUESTION [5 upvotes]: $\displaystyle \lim_{x\rightarrow 0}\frac{1-\cos a_{1}x \cdot \cos a_{2}x\cdot \cos a_{3}x \cdots \cos a_{n}x}{x^2}$ without L'Hospital's rule and series expansion. I have solved it using the series expansion of $\cos x$ but want to be able to do it without series expansion. REPLY [2 votes]: Another approach that I found worth sharing is using multi-variable calculus. However, you should further check the conditions on multi-variable limits. Although the solution involves differentiation, it is not due to L'Hospital's rule. Let's define $f(u)$ and $g(x,y)$ as $$f(u)=\cos{(a_1u)}\cdot\cos{(a_2u)}\cdot\ldots\cdot\cos{(a_nu)} =\prod_{i=1}^{n}{\cos(a_iu)}\\ g(x,y)=\frac{f(y)-f(x)}{x^2-y^2}$$ You can easily verify that the limit $L$ in the question, i.e., $$L=\lim_{x\to0}{\frac{1-f(x)}{x^2}}$$ is equivalent to this double-limit form $$L=\lim_{x\to0}{\lim_{y\to0}{g(x,y)}}$$ because the inner limit comes out as $(1-f(x))/x^2$. Now, assuming $g(x,y)$ has a limit at $(0,0)$, the path to reach $(0,0)$ should not matter, so we use another path: $$L=\lim_{y\to0}{\lim_{x\to y}{g(x,y)}}$$ Therefore \begin{aligned} L&=\lim_{y\to0}{\lim_{x\to y}{g(x,y)}}\\ &=\lim_{y\to0}{\lim_{x\to y}{\frac{f(y)-f(x)}{x^2-y^2}}}\\ &=\lim_{y\to0}{\lim_{x\to y}{\frac{f(y)-f(x)}{x-y}\frac1{x+y}}}\\ &=\lim_{y\to0}{\left( \lim_{x\to y}{\frac{f(y)-f(x)}{x-y}}\lim_{x\to y}{\frac1{x+y}} \right)}\\ \end{aligned} The first limit inside the bracket is (by definition) $-f'(y)$ and the second one equals $1/2y$. So, $L$ would be $$L=-\lim_{y\to0}{\frac{f'(y)}{2y}}$$ The only thing left is to find $f'(u)$.
Using the differentiation rule for products one gets $$f'(u)=-\sum_{i=1}^{n}{\left(a_i\sin{(a_iu)} \prod_{j\neq i}\cos(a_ju)\right)}$$ By replacing this into the limit we'll have \begin{aligned} L&=\lim_{y\to0}{\sum_{i=1}^{n}{\left(\frac{a_i\sin{(a_iy)}}{2y} \prod_{j\neq i}\cos(a_jy)\right)}} \\ &= \sum_{i=1}^{n}{\left( \lim_{y\to0}{\frac{a_i\sin{(a_iy)}}{2y}}\lim_{y\to0}{\prod_{j\neq i}\cos(a_jy)} \right)} \end{aligned} The first limit equals $a_i^2/2$ and the second one is $1$. Finally, $L$ is $$L=\frac12\sum_{i=1}^{n}{a_i^2}$$<|endoftext|> TITLE: Ito Isometry against non-Brownian SDE QUESTION [6 upvotes]: Suppose $X_t$ is a semimartingale and $H_t$ is $X_t$-predictable. I know that if $X_t=W_t$ is a Wiener process then $$ \mathbb{E}[(H\cdot W_T)^2] = \mathbb{E}[\int_0^TH_t^2dt], $$ where $H\cdot W_T$ denotes the stochastic integral of $H_t$ against $W_t$ up to time $T$.
My question is: if $X_t$ is not a Wiener process, then what is $$ \mathbb{E}[(H\cdot X_T)^2] $$ equal to? REPLY [5 votes]: If we assume that $X_t$ is $L^2$ and that $H_t$ is also $L^2$, in their respective senses, then we may proceed as follows... The result is again called the Ito isometry and, given your setting, is as follows: Itô Isometry $$ \mathbb{E}\left[\left( \int_0^T H_t dX_t\right)^2 \right] = \mathbb{E}\left[ \int_0^T H_t^2 d[X]_t \right], $$ where $[X]_t$ denotes the quadratic variation of $X$. Theorem 5 in this blog shows the details of the result. In particular if $X_t$ is an Ito process, that is $X_t$ satisfies the SDE $$ dX_t= \mu_tdt +\Sigma_tdW_t, $$ then $d[X]_t=\Sigma_t^{\star}\Sigma_t\,dt$. In this case the Ito isometry simplifies to $$ \mathbb{E}\left[\left( \int_0^T H_t dX_t\right)^2 \right] = \mathbb{E}\left[ \int_0^T H_t^2 \Sigma_t^{\star}\Sigma_t dt \right]. $$ Hope this helped :)<|endoftext|> TITLE: Prove that the chord of the ellipse passes through a fixed point QUESTION [7 upvotes]: Variable pairs of chords at right angles are drawn through a point $P$ (forming an angle of $\pi/4$ with the major axis) on the ellipse $\frac {x^2}{4}+y^2=1$, to meet the ellipse at two points $A $ and $B $. Prove that the line joining these two points passes through a fixed point. I am writing the equations and they are just as filthy as they could be. It's just tedious. I am sure there is something I am missing or some better way to approach it. Thanks. REPLY [4 votes]: This is a property of general ellipses, so let's consider the ellipse with major and minor radii $a$ and $b$. For a given point $P = (a \cos 2\theta, b \sin 2\theta)$, we'll identify the point $Q$ common to all chords $\overline{AB}$ such that $\overline{AP}\perp\overline{BP}$. It's straightforward to find the coordinates of $Q$ at the intersection of two convenient chords, namely: $\overline{A_0 B_0}$, the "other" diagonal of the inscribed rectangle with vertex $P$; and $\overline{PN}$, along the normal to the ellipse at $P$ (which serves as the degenerate case). $$\begin{align} \overline{A_0 B_0}:&\quad b x \sin 2\theta + a y \cos 2\theta = 0 \\ \overline{PN}:&\quad a x \sin 2\theta - b y \cos 2\theta = ( a^2 - b^2 ) \cos 2\theta \sin 2\theta \end{align}$$ $$Q := \overline{A_0 B_0} \cap \overline{PN} = \frac{a^2-b^2}{a^2+b^2} \;\left( a \cos 2\theta, - b \sin 2\theta \right)$$ Now, consider a generic chord $\overline{AB}$ where $A = (a \cos2\alpha, b \sin 2\alpha)$ and $B = (a \cos2\beta, b \sin 2\beta)$. The condition that $\overline{AP}\perp\overline{BP}$ gives rise to this equation $$\begin{align} 0 &= (A-P)\cdot(B-P) \\ &= 4 \sin(\alpha-\theta) \sin(\beta-\theta) \left(\; a^2 \sin(\alpha + \theta) \sin( \beta + \theta ) + b^2 \cos(\alpha + \theta) \cos(\beta + \theta)\;\right) \end{align}$$ where we may safely ignore the initial factors, so that $$a^2 \sin(\alpha + \theta) \sin( \beta + \theta ) + b^2 \cos(\alpha + \theta) \cos(\beta + \theta) = 0 \tag{1}$$ On the other hand, the equation for the line containing $\overline{AB}$ is $$b x \cos(\alpha + \beta) + a y \sin(\alpha + \beta) = a b \cos(\alpha - \beta) $$ so that the condition that $Q$ lies on the chord becomes $$\frac{a^2-b^2}{a^2+b^2}\;\left(\;a b \cos(\alpha + \beta) \cos 2\theta - a b \sin(\alpha + \beta) \sin 2\theta \; \right) = a b \cos(\alpha - \beta) $$ whereupon $$\left(a^2-b^2\right)\cos(\alpha + \beta + 2\theta) = \left(a^2+b^2\right) \cos(\alpha - \beta) \tag{2}$$ Verification that (1) and (2) are equivalent is left as an easy exercise for the reader.
<|endoftext|> TITLE: Prove that the chord of the ellipse passes through a fixed point QUESTION [7 upvotes]: Variable pairs of chords at right angles are drawn through a point $P$ (forming an angle of $\pi/4$ with the major axis) on the ellipse $\frac {x^2}{4}+y^2=1$, to meet the ellipse at two points $A$ and $B$. Prove that the line joining these two points passes through a fixed point. I am writing the equations and they are just as filthy as they could be. It's just tedious. I am sure there is something I am missing or some better way to approach it. Thanks. REPLY [4 votes]: This is a property of general ellipses, so let's consider the ellipse with major and minor radii $a$ and $b$. For a given point $P = (a \cos 2\theta, b \sin 2\theta)$, we'll identify the point $Q$ common to all chords $\overline{AB}$ such that $\overline{AP}\perp\overline{BP}$. It's straightforward to find the coordinates of $Q$ at the intersection of two convenient chords, namely: $\overline{A_0 B_0}$, the "other" diagonal of the inscribed rectangle with vertex $P$; and $\overline{PN}$, along the normal to the ellipse at $P$ (which serves as the degenerate case). $$\begin{align} \overline{A_0 B_0}:&\quad b x \sin 2\theta + a y \cos 2\theta = 0 \\ \overline{PN}:&\quad a x \sin 2\theta - b y \cos 2\theta = ( a^2 - b^2 ) \cos 2\theta \sin 2\theta \end{align}$$ $$Q := \overline{A_0 B_0} \cap \overline{PN} = \frac{a^2-b^2}{a^2+b^2} \;\left( a \cos 2\theta, - b \sin 2\theta \right)$$ Now, consider a generic chord $\overline{AB}$ where $A = (a \cos2\alpha, b \sin 2\alpha)$ and $B = (a \cos2\beta, b \sin 2\beta)$. The condition $\overline{AP}\perp\overline{BP}$ gives rise to the equation $$\begin{align} 0 &= (A-P)\cdot(B-P) \\ &= 4 \sin(\alpha-\theta) \sin(\beta-\theta) \left(\; a^2 \sin(\alpha + \theta) \sin( \beta + \theta ) + b^2 \cos(\alpha + \theta) \cos(\beta + \theta)\;\right) \end{align}$$ where we may safely ignore the initial factors (they vanish only when $A$ or $B$ coincides with $P$), so that $$a^2 \sin(\alpha + \theta) \sin( \beta + \theta ) + b^2 \cos(\alpha + \theta) \cos(\beta + \theta) = 0 \tag{1}$$ On the other hand, the equation of the line containing $\overline{AB}$ is $$b x \cos(\alpha + \beta) + a y \sin(\alpha + \beta) = a b \cos(\alpha - \beta) $$ so the condition that $Q$ lies on the chord becomes $$\frac{a^2-b^2}{a^2+b^2}\;\left(\;a b \cos(\alpha + \beta) \cos 2\theta - a b \sin(\alpha + \beta) \sin 2\theta \; \right) = a b \cos(\alpha - \beta) $$ whereupon $$\left(a^2-b^2\right)\cos(\alpha + \beta + 2\theta) = \left(a^2+b^2\right) \cos(\alpha - \beta) \tag{2}$$ Verification that (1) and (2) are equivalent is left as an easy exercise for the reader (expand both with the angle-sum formulas in the variables $\alpha+\theta$ and $\beta+\theta$). $\square$
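For readers who want to see the fixed point before working through the algebra, here is a short numerical sketch (the parameter values are my own, assuming NumPy): it samples values of $\alpha$, solves the perpendicularity condition (1) for $\beta$, and verifies that every resulting chord passes through $Q$.

```python
import numpy as np

a, b, theta = 2.0, 1.0, 0.3   # ellipse radii and the parameter of P = (a cos 2t, b sin 2t)
Q = (a**2 - b**2) / (a**2 + b**2) * np.array([a * np.cos(2 * theta),
                                              -b * np.sin(2 * theta)])

for alpha in np.linspace(0.1, 3.0, 7):
    # Condition (1) says tan(beta + theta) = -b^2 cos(alpha+theta) / (a^2 sin(alpha+theta)).
    beta = np.arctan2(-b**2 * np.cos(alpha + theta),
                      a**2 * np.sin(alpha + theta)) - theta
    # Chord AB: b x cos(alpha+beta) + a y sin(alpha+beta) = a b cos(alpha-beta).
    residual = (b * Q[0] * np.cos(alpha + beta)
                + a * Q[1] * np.sin(alpha + beta)
                - a * b * np.cos(alpha - beta))
    print(f"alpha = {alpha:.2f}:  residual = {residual:+.2e}")  # ~ 1e-16 for each chord
```

Every residual vanishes to machine precision, which is exactly the equivalence of (1) and (2) asserted at the end of the proof.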
<|endoftext|> TITLE: What is the difference between moment projection and information projection? QUESTION [6 upvotes]: Moment projection is defined as $$\text{arg min}_{q\in Q} D(p||q)$$ while information projection is defined as $$\text{arg min}_{q\in Q} D(q||p).$$ Aside from the difference in the formula, how should one interpret the difference between the two measures intuitively? And when should one use moment projection over information projection, and vice versa? REPLY [4 votes]: Both the M-projection and the I-projection are projections of a probability distribution $p$ into a set of distributions $Q$. They can be defined as the distribution $q$, chosen among all those included in the set $Q$, that is "closest" to $p$. Here the concept of "closest" refers to the distribution that minimizes the relative entropy from $p$ to $q$, which is a well-known measure of discrepancy, also called the Kullback–Leibler divergence and commonly denoted $D(p||q)$. In particular, since the relative entropy expresses the information gained when shifting from $q$ to $p$, the M-projection and the I-projection can be interpreted as the distributions that minimize the amount of information lost when $q$ is used as a surrogate of $p$. Since the relative entropy is not symmetric, the M-projection and the I-projection are often different. The main differences between them can be understood by looking at what each minimizes in terms of entropy and cross-entropy. The M-projection is the distribution $q$ that minimizes $$D(p||q)=-H_p +E_p(-\log q)$$ where $H_p$ is the entropy of the distribution $p$ and $E_p(-\log q)$ is the cross-entropy between $p$ and $q$. The distribution $q$ that minimizes this distance usually tends to show high density in all regions that are probable according to $p$ (this is because a small $-\log q$ in these regions yields a smaller second term). Also, the minimizing $q$ tends to extend over regions with intermediate probability according to $p$ (i.e., it is not strictly concentrated only in the peaks of $p$), because the penalty due to low density in these regions is considerable. The final result is that the M-projection commonly tends to show a relatively large variance. On the other hand, the I-projection is the distribution $q$ that minimizes $$D(q||p)=-H_q +E_q(-\log p)$$ where $H_q$ is the entropy of the distribution $q$ and $E_q(-\log p)$ is the cross-entropy between $q$ and $p$. Although the first term gives some penalty for low entropy of $q$, the effect of the second term often predominates, so the minimizing $q$ usually tends to show very high density in regions where $p$ is large and very low density in regions where $p$ is small. In other words, the mass of $q$ tends to be concentrated in the peak region of $p$. The final result is that the I-projection commonly tends to show a relatively small variance. As regards the main applications, both the M-projection and the I-projection play important roles in graphical models. The M-projection is fundamental for learning problems where we have to find a distribution that is closest to the empirical distribution of the data set from which we want to learn. In contrast, the I-projection (easier from a computational point of view) has important applications in information geometry (e.g., via the information-geometric version of the Pythagorean theorem, in which relative entropy plays the role of squared Euclidean distance) and in the analysis of error exponents in various information theory problems such as hypothesis testing, source coding, and channel coding. It can also be used for handling probability queries, particularly when a distribution $p$ is too complex to allow an efficient answering process. In this case, using an I-projection as an approximation of $p$ may be a good way to obtain more efficient query processing.
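The large-variance versus small-variance contrast described above can be reproduced with a brute-force computation. The following sketch (my own construction, assuming NumPy) discretizes a bimodal Gaussian mixture $p$ on a grid and searches over single Gaussians $q = \mathcal{N}(\mu, \sigma^2)$ for the minimizers of the two divergences:

```python
import numpy as np

x = np.linspace(-8, 8, 2001)
dx = x[1] - x[0]

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

p = 0.5 * gauss(x, -2.0, 0.5) + 0.5 * gauss(x, 2.0, 0.5)   # bimodal target

def kl(f, g):
    eps = 1e-300                      # guards against log(0) in empty tails
    return np.sum(f * (np.log(f + eps) - np.log(g + eps))) * dx

best_m = best_i = (np.inf, None, None)
for mu in np.linspace(-3, 3, 61):
    for s in np.linspace(0.2, 4.0, 77):
        q = gauss(x, mu, s)
        m_val, i_val = kl(p, q), kl(q, p)
        if m_val < best_m[0]: best_m = (m_val, mu, s)
        if i_val < best_i[0]: best_i = (i_val, mu, s)

print("M-projection, argmin D(p||q):", best_m[1:])  # expect mu ~ 0, sigma ~ 2.06
print("I-projection, argmin D(q||p):", best_i[1:])  # expect mu ~ +/-2, sigma ~ 0.5
```

The M-projection matches the mean and variance of the mixture and so spreads across both modes, while the I-projection collapses onto a single mode, exactly the behaviour described above.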
<|endoftext|> TITLE: Tensor product commutes with direct limits QUESTION [9 upvotes]: Let $(A_n, \phi_n)$ be an inductive system of $C^*$-algebras and let $B$ be an arbitrary $C^*$-algebra. Is it true that $(\varinjlim A_n)\otimes B \cong \varinjlim (A_n \otimes B)$? I'm using the notation $\otimes$ for the spatial norm. If not, please give me a counterexample, but if it is true let me prove this by myself... Thank you! REPLY [9 votes]: Let $(A_n,\varphi_{n+1,n})_{n\in \mathbb{N}}$ be an inductive system of $C^*$-algebras with faithful connecting maps and let $B$ be an arbitrary $C^*$-algebra. Then $(\varinjlim A_n)\otimes B \cong \varinjlim (A_n\otimes B)$. Remark: If the connecting maps are not faithful, the claim is not true. Proof: Denote by $(A,\{\varphi^n\})$ the inductive limit of $A_1\xrightarrow{\varphi_{2,1}} A_2 \xrightarrow{\varphi_{3,2}}A_3 \xrightarrow{\varphi_{4,3}}\ldots$ For each $n\in \mathbb{N}$ we have a unique $*$-homomorphism $\varphi_{n+1,n}\otimes id_B:A_n\otimes B \to A_{n+1}\otimes B$ such that $(\varphi_{n+1,n}\otimes id_B)(a\otimes b)=\varphi_{n+1,n}(a)\otimes b$ for all $a\in A_n, b\in B$. As $\varphi_{n+1,n}$ and $id_B$ are injective for all $n$, the map $\varphi_{n+1,n}\otimes id_B$ is also faithful (by Exercises 3.3.5 and 3.4.1 in Brown and Ozawa). We have an inductive system $$A_1\otimes B \xrightarrow{\varphi_{2,1}\otimes id} A_2\otimes B \xrightarrow{\varphi_{3,2}\otimes id}A_3\otimes B \xrightarrow{\varphi_{4,3}\otimes id}\ldots$$ Denote by $(C, \{\psi^n\})$ its inductive limit. We need to show that $C\cong A\otimes B$. For all $n\in \mathbb{N}$ the maps $\varphi^n\otimes id_B : A_n\otimes B\to A\otimes B$ are compatible with the connecting maps, i.e. $(\varphi^{n+1}\otimes id_B)\circ(\varphi_{n+1,n}\otimes id_B)=\varphi^n\otimes id_B$ (we check this on elementary tensors and then use linearity and continuity). By the universal property of inductive limits there exists a unique $*$-homomorphism $\lambda:C\to A\otimes B$ satisfying $\lambda\circ\psi^n=\varphi^n\otimes id_B$ for all $n$. It is left to show that $\lambda$ is a $*$-isomorphism. $\lambda$ is injective: We have $C=\overline{\cup_n{\psi^n(A_n\otimes B)}}$ and $A=\overline{\cup_n{\varphi^n(A_n)}}$. It suffices to check that $\lambda$ is injective on each $\psi^n(A_n\otimes B)$. But $\overline{\psi^n(A_n\odot B)}=\psi^n(A_n\otimes B)$, so it suffices to show $\lambda$ is isometric on $\psi^n(A_n\odot B)$, and this is immediate from $\lambda\circ\psi^n=\varphi^n\otimes id_B$ and the fact that $\varphi^n\otimes id$ is isometric. $\lambda$ is surjective: As the range of $\lambda$ is closed, it suffices, for any $\epsilon>0$ and $y\in A\otimes B$, to find $x\in C$ such that $\|\lambda(x)-y\|<\epsilon$. So let $z\in A\odot B$ be such that $\|z-y\|_{min}<\epsilon/2$. Write $z=\sum_{i=1}^{m} s_i\otimes b_i$, where $s_i\in A$ and $b_i\in B$. We can find some $n$ and $a_1,\dots,a_m\in A_n$ such that $\|\varphi^n(a_i)-s_i\|<\frac{\epsilon}{2m\max_i{\|b_i\|}}$ for all $1\leq i\leq m$. Therefore, $\|z-\sum_{i=1}^{m} \varphi^n(a_i)\otimes b_i\|=\|\sum_{i=1}^{m}(s_i-\varphi^n(a_i))\otimes b_i\|\leq \sum_{i=1}^{m} \|s_i-\varphi^n(a_i)\| \|b_i\|<\epsilon/2$. Combining the two, we have $\|y-\sum_{i=1}^{m} \varphi^n(a_i)\otimes b_i\|<\epsilon$ and $\lambda(\psi^n(\sum_{i=1}^{m} a_i\otimes b_i))=(\varphi^n\otimes id)(\sum_{i=1}^{m} a_i\otimes b_i)=\sum_{i=1}^{m} \varphi^n(a_i)\otimes b_i$, as required. I hope my solution is correct; any comments would be appreciated.<|endoftext|> TITLE: Are there infinite number of sizes of gaps between primes? QUESTION [8 upvotes]: Are there an infinite number of sizes of gaps between primes? Let $p_n$ be the $n$th prime number and let $g_n = p_{n+1} - p_n$ (i.e., the size of the gap between consecutive primes). As $p_n$ goes to infinity, does $g_n$ go to infinity also? REPLY [17 votes]: You can easily find as long a string of composites as you wish (for instance, $n!+2, n!+3, \dots, n!+n$ gives $n-1$ consecutive composites; see "Consecutive composite numbers"), so the gaps between primes can be arbitrarily large and hence take infinitely many different values. But that does not mean the size of the gap goes to infinity: in fact, $g_n$ is less than 70 million infinitely often. https://en.wikipedia.org/wiki/Yitang_Zhang As @DunstanLevenstein comments, 70 million was the bound in Zhang's revolutionary paper; it has since been reduced to 246. It is thought that in fact there are infinitely many twin primes, so the conjecture is that the bound is actually 2.<|endoftext|> TITLE: In what sense is $S^\infty$ the same as $\{x \in \ell_2 : \|x\| = 1 \}$? QUESTION [6 upvotes]: I hear things that sound like topologists equate $S^\infty$ (defined as the union or "directed colimit" of $n$-spheres) with the actual unit sphere in, say, a nice vector space like $\ell_2$. In what sense is this a rigorous statement? Are they homeomorphic? Or is it weaker? REPLY [4 votes]: They're both contractible, and indeed the inclusion $S^\infty \hookrightarrow S(\ell_2)$ is a homotopy equivalence. Now the main value of this isn't that they're both contractible; it's that for most natural group actions on these spheres, this map is equivariant (in particular, for $\Bbb Z/2$, $S^1$, $\Bbb Z/n \subset S^1$, and $S^3$). The map between the quotient spaces is also a homotopy equivalence. Quotienting on the left gives you CW complexes modeling $BG$ (respectively, $\Bbb{RP}^\infty$, $\Bbb{CP}^\infty$, the infinite lens spaces $L(n) = S^\infty/(\Bbb Z/n)$, and $\Bbb{HP}^\infty$), and quotienting on the right gives you Hilbert manifolds modeling these spaces. CW complexes are more useful for the purposes of some parts of algebraic topology, like calculating various cohomology theories on these spaces (the CW structure gives you a nice spectral sequence). On the other hand, the Hilbert manifold structure is more useful for parts of differential topology, including Morse theory or anything where you need the notion of smoothness. So both sides are useful, but since they're naturally homotopy equivalent, it doesn't much matter which side you use when you're doing homotopy-invariant things.