TITLE: What's an infinite dimensional or function version of a tensor?
QUESTION [7 upvotes]: A function $f$ is like an infinite dimensional vector with the norm $\|f\| = \left(\int^b_a f(t)^2 \, \mathrm{d} t\right)^{1/2} $ and dot product $f \cdot g = \int^b_a f(t) g(t) \, \mathrm{d} t $, where appropriate boundaries have to be chosen, or you have to restrict the functions, so that problems with infinities don't pop up.
A linear operator, integral transform or its kernel $K$ is like an infinite dimensional matrix with the application operation being $Kf = \int^b_a K(s, t) f(t) \, \mathrm{d} t$. Matrix multiplication is $KL = \int^b_a K(s, t) L(t, u) \, \mathrm{d} t$. Again, you have to be careful to deal with problems with infinities.
How do we generalize to the next level? What is the tensor version?
In particular, what does tensor contraction and the tensor product mean?
I know tensors are composed out of linear combinations of tensor products of vectors (which are functions in this case). But I'm confused about what a linear combination means in this case.
Suppose one has a tensor:
$$ T = \sum^n_{k=0} c_k (u_k \otimes v_k) \otimes w_k $$
What exactly does a linear combination mean?
REPLY [7 votes]: Any finite-dimensional vector space $V$ is isomorphic to a coordinate vector space $\mathbb{R}^n$ which can be thought of as the vector space of functions $\{1,\cdots,n\}\to \mathbb{R}$.
Generalizing this, we can use a different index set than $\{1,\cdots,n\}$, for instance we can use a continuous interval as an index set. Then a "coordinate vector" whose indices are from $[0,1]$ should be thought of as a function $[0,1]\to\mathbb{R}$.
The dot product $f\cdot g=\sum_i f_i g_i$ (I will put all my indices downstairs) for $\{1,\cdots,n\}$-indexed coordinate vectors can then be generalized to $\langle f,g\rangle=\int_0^1 f(x)g(x)\,\mathrm{d}x$. Similarly, the equation $f=Ag$ (where $f,g$ are column vectors and $A$ is a matrix), written with indices as $f_i=\sum_j a_{ij}g_j$, with integration replacing summation becomes the kernel $f(x)=\int_0^1 A(x,y)g(y)\,\mathrm{d}y$.
With this perspective in mind, tensor contraction should generalize in an obvious way. For instance, the tensor contraction $\sum_{i,j,k} a_{ijk}b_{ir}c_{js}e_{kl}$ is now $\iiint a(i,j,k)b(i,r)c(j,s)e(k,l) \,\mathrm{d}i\,\mathrm{d}j\,\mathrm{d}k$, which is a function of $r$, $s$ and $l$. Indeed, the original kind of tensor contraction with a discrete set of indices can be considered a special case, since summation is just integration over a finite measure space.
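To make the analogy concrete, here is a small numerical sketch (not part of the original answer, and only a discretized approximation): sampling $[0,1]$ at $N$ points turns "functions" into vectors, kernels into matrices, and the integrals into weighted sums.

```python
# Discretize the "continuous index set" [0,1] with N points; functions become
# vectors, kernels become matrices, and integration becomes a weighted sum.
import numpy as np

N = 200
x = np.linspace(0, 1, N)
dx = x[1] - x[0]

g = np.sin(np.pi * x)                            # a "coordinate vector" indexed by [0,1]
A = np.exp(-np.abs(x[:, None] - x[None, :]))     # a kernel A(x, y)

f = A @ g * dx                                   # f(x) = int_0^1 A(x,y) g(y) dy, approximately

# A three-index "tensor" contracted against g over its middle index:
T = x[:, None, None] * x[None, :, None] * x[None, None, :]   # T(i,j,k) = x_i x_j x_k
C = np.einsum('ijk,j->ik', T, g) * dx            # integrate out the j index
print(f.shape, C.shape)                          # (200,), (200, 200)
```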
Now let's consider tensor products. If $\mathbb{R}^X$ denotes the vector space of functions $X\to\mathbb{R}$, there is a canonical identification $\mathbb{R}^X\otimes\mathbb{R}^Y\cong\mathbb{R}^{X\times Y}$. There is a bilinear map $\mathbb{R}^X\times\mathbb{R}^Y\to\mathbb{R}^{X\times Y}$ where the pair $(f,g)\in\mathbb{R}^X\times\mathbb{R}^Y$ is sent to the function $X\times Y\to\mathbb{R}$ defined by $h(x,y):=f(x)g(y)$, and this extends to an isomorphism $\mathbb{R}^X\otimes\mathbb{R}^Y\to \mathbb{R}^{X\times Y}$. This works even if $X$ and $Y$ aren't discrete.
Pure tensors $u\otimes v$, then, correspond to separable functions $u(x)v(y)$. So then any arbitrary linear combination $\sum_i c_i(u_i\otimes v_i\otimes w_i)$ would correspond to a linear combination $\sum_i c_iu_i(x)v_i(y)w_i(z)$ of separable functions. (For example in differential equations, we sometimes find the separable solutions first, then superpose them to get the whole solution space.)<|endoftext|>
TITLE: Deriving the mean of the Gumbel Distribution
QUESTION [5 upvotes]: I'm trying to determine an expected value of a random variable related to the Gumbel/Extreme Value Type 1 distribution. I think the answer follows the same process as expected value of the Gumbel itself, but I can't figure out the derivation of the expected value of the Gumbel.
I found here a derivation, but there's a step in the middle where magic happens. Need to understand what's going on there to see if I can apply it to my other problem.
Recall that the density of the Gumbel distribution is $f(x) = e^{-e^{-x}}e^{-x}$. The derivation at the link shows that
$$
\int_{-\infty}^{\infty}x e^{-x} e^{-e^{-x}}dx = - \int_{0}^{\infty}{\ln y}e^{-y}dy\quad [y=e^{-x}]\\
= -\frac{d}{d\alpha}\int_0^\infty y^\alpha e^{-y}dy\bigg|_{\alpha=0}\\
=-\frac{d}{d\alpha}\Gamma(\alpha+1)\bigg|_{\alpha=0}\\
=-\Gamma'(1) = \gamma \approx 0.577...
$$
The jump from the first line to the second is the one I can't follow. I've tried doing integration by parts on one or the other to demonstrate the equivalence, but I end up with a floating 1 or infinity.
Thanks in advance!
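A quick numerical check of the value being derived (not part of the original post; it assumes numpy and scipy are available):

```python
# Check numerically that -int_0^inf ln(y) e^(-y) dy equals Euler's gamma, and
# that the Gumbel mean int x e^(-x) e^(-e^(-x)) dx gives the same value (the
# tails outside [-20, 30] are negligible; the finite range avoids overflow).
import numpy as np
from scipy.integrate import quad

log_integral, _ = quad(lambda y: -np.log(y) * np.exp(-y), 0, np.inf)
gumbel_mean, _ = quad(lambda x: x * np.exp(-x) * np.exp(-np.exp(-x)), -20, 30)

print(log_integral, gumbel_mean, np.euler_gamma)   # all ~ 0.5772156649
```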
REPLY [3 votes]: The step in question employs a trick called "differentiating under the integral." The idea is to introduce a parameter in the integrand, and express the integrand as a derivative with respect to this parameter, then change the order of integration and differentiation (assuming certain regularity conditions hold).
Explicitly, suppose we let $f(y,\alpha) = y^\alpha e^{-y}.$ Then $$\frac{\partial f}{\partial \alpha} = y^\alpha \log y \, e^{-y}.$$ Then letting $\alpha = 0$ gives us the integrand in the first line of the solution; hence $$-\int_{y=0}^\infty \log y \, e^{-y} \, dy = -\int_{y=0}^\infty \frac{\partial}{\partial \alpha}\left[y^\alpha e^{-y}\right]_{\alpha = 0} \, dy = -\frac{\partial}{\partial \alpha} \left[\int_{y=0}^\infty y^\alpha e^{-y} \, dy \right]_{\alpha = 0}.$$<|endoftext|>
TITLE: Proving a contraction mapping is a Cauchy sequence
QUESTION [6 upvotes]: Let $\phi(x):[a,b]\rightarrow [a,b]$ be a continuous function. Show that if $\phi(x)$ is a contraction mapping on $[a,b]$ then the sequence $\{x^{(k)}\}$ defined by $x^{(k+1)} = \phi(x^{(k)})$ is a Cauchy sequence.
Attempted solution - Since $\phi(x)$ is a contraction mapping we have $$|x^{(k+1)} - x^{(k)}| = |\phi(x^{(k)}) - \phi(x^{(k-1)})|\leq L|x^{(k)} - x^{(k-1)}|$$ Applying this idea repeatedly we get $$|x^{(k+1)} - x^{(k)}|\leq L^k|x^{(1)} - x^{(0)}|$$ Now consider the term that must be bounded in order to be a Cauchy sequence \begin{align*}
|x^{(m)} - x^{(m+n)}| &= |(x^{(m)} - x^{(m+1)}) + (x^{(m+1)} - x^{(m+2)}) + \ldots + (x^{(m+n-1)} - x^{(m+n)})|\\
&\leq |(x^{(m)} - x^{(m+1)})| + |(x^{(m+1)} - x^{(m+2)})| + \ldots + |(x^{(m+n-1)} - x^{(m+n)})|\\
&\leq (L^m + L^{m+1} + \ldots + L^{m+n-1})|x^{(1)} - x^{(0)}|
\end{align*}
I am not sure how to proceed and show that for some $M$ we can get this inequality to be less than some $\epsilon$.
Any suggestions are greatly appreciated.
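As a concrete illustration (not part of the original post), here is how fast the successive differences shrink for one particular contraction, $\phi(x)=\cos x$ on $[0,1]$, whose contraction constant is $L=\sin 1\approx 0.84$:

```python
# Iterate x_{k+1} = cos(x_k) on [0, 1] and print |x_{k+1} - x_k|; the
# differences decay at least geometrically with ratio L = sin(1) ~ 0.84.
import math

x = 0.0
prev_diff = None
for k in range(15):
    x_next = math.cos(x)
    diff = abs(x_next - x)
    ratio = diff / prev_diff if prev_diff else float("nan")
    print(f"k={k:2d}  |x_(k+1)-x_k| = {diff:.2e}  ratio = {ratio:.3f}")
    prev_diff, x = diff, x_next
```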
REPLY [6 votes]: You are very close to a complete proof. All you need is the "final step".
Here is your proof, completed with the "final step" (in details).
Let $\phi(x):[a,b]\rightarrow [a,b]$ be a continuous function. Show that if $\phi(x)$ is a contraction mapping on $[a,b]$ then the sequence $\{x^{(k)}\}$ defined by $x^{(k+1)} = \phi(x^{(k)})$ is a Cauchy sequence.
Proof - Since $\phi(x)$ is a contraction mapping we have
$$|x^{(k+1)} - x^{(k)}| = |\phi(x^{(k)}) - \phi(x^{(k-1)})|\leq L|x^{(k)} - x^{(k-1)}|$$
where $0\leq L< 1$.
It follows by induction (that is, by applying this idea repeatedly) that $$|x^{(k+1)} - x^{(k)}|\leq L^k|x^{(1)} - x^{(0)}|$$ Now consider the term that must be bounded in order to be a Cauchy sequence \begin{align*}
|x^{(m)} - x^{(m+n)}| &= |(x^{(m)} - x^{(m+1)}) + (x^{(m+1)} - x^{(m+2)}) + \ldots + (x^{(m+n-1)} - x^{(m+n)})|\\
&\leq |(x^{(m)} - x^{(m+1)})| + |(x^{(m+1)} - x^{(m+2)})| + \ldots + |(x^{(m+n-1)} - x^{(m+n)})|\\
&\leq (L^m + L^{m+1} + \ldots + L^{m+n-1})|x^{(1)} - x^{(0)}| \\
& =\left( \sum_{k=m}^{m+n-1}L^k \right) |x^{(1)} - x^{(0)}| \leq
\left( \sum_{k=m}^{\infty}L^k \right) |x^{(1)} - x^{(0)}| \\
& = \frac{L^m}{1-L}|x^{(1)} - x^{(0)}|
\end{align*}
Given $\varepsilon >0$, since $0\leq L <1$, there is $M\in \mathbb{N}$ such that for all $m>M$ and all $n \in \mathbb{N}$,
$$|x^{(m)} - x^{(m+n)}| \leq \frac{L^m}{1-L}|x^{(1)} - x^{(0)}| \leq \varepsilon$$
So the sequence $\{x^{(k)}\}$ is a Cauchy sequence.<|endoftext|>
TITLE: Feasible point of a system of linear inequalities
QUESTION [5 upvotes]: Let $P$ denote the set of points $(x,y,z)\in \mathbb R^3$ which satisfy the inequalities:
$$-2x+y+z\leq 4$$ $$x \geq 1$$ $$y\geq2$$ $$ z \geq 3 $$ $$x-2y+z \leq 1$$ $$ 2x+2y-z \leq 5$$
How do I find an interior point in $P$?
Is there a specific method, or should I just try some random combinations and then logically find an interior point?
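One concrete way to let a computer do it (a sketch, not part of the original post): instead of the phase-1 simplex idea in the answer below, find the Chebyshev center, i.e. the point maximizing its margin $t$ to every constraint plane, with scipy's LP solver. Any positive margin certifies an interior point.

```python
# Find an interior point of P by maximizing the distance t to all constraint
# planes (Chebyshev center).  Variable order is (x, y, z, t); requires scipy.
import numpy as np
from scipy.optimize import linprog

# Each row is a.(x,y,z) <= b, with the ">=" constraints negated.
A = np.array([[-2,  1,  1],   # -2x +  y + z <= 4
              [-1,  0,  0],   #  x >= 1
              [ 0, -1,  0],   #  y >= 2
              [ 0,  0, -1],   #  z >= 3
              [ 1, -2,  1],   #  x - 2y + z <= 1
              [ 2,  2, -1]])  # 2x + 2y - z <= 5
b = np.array([4, -1, -2, -3, 1, 5])

norms = np.linalg.norm(A, axis=1)
A_ub = np.hstack([A, norms[:, None]])      # a.(x,y,z) + t*||a|| <= b
c = np.array([0, 0, 0, -1])                # maximize t  <=>  minimize -t

res = linprog(c, A_ub=A_ub, b_ub=b, bounds=[(None, None)] * 4)
print(res.x[:3], "with margin", res.x[3])  # strictly interior since t > 0
```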
REPLY [2 votes]: You could also use the simplex method to solve the following problem:
$$
\min\limits_{x,y,z,e,a}\quad Z= a_1+a_2+a_3
$$
subject to
$$-2x+y+z+e_1 =4$$
$$x -e_2+a_1= 1$$
$$y-e_3+a_2=2$$
$$ z-e_4+a_3 = 3 $$
$$x-2y+z +e_5= 1$$
$$ 2x+2y-z +e_6= 5$$
$$
x,y,z,e,a\ge 0
$$
If $\min \{a_1+a_2+a_3\} = 0$, then you have a feasible point (with $a_1=a_2=a_3=0$).<|endoftext|>
TITLE: X={1,2,3}. Give a list of topologies on X such that every topology on X is homeomorphic to exactly one on your list.
QUESTION [5 upvotes]: I'm teaching myself topology with the aid of a book. I'm trying to do the following problem:
Let X={1,2,3}. Give a list of topologies on X such that every topology on
X is homeomorphic to exactly one on your list.
I'm not sure if I totally understand what is being asked, but I'm going to attempt to list every topology in groups that are homeomorphic to one another.
I want to know if this is correct.
(A) trivial topology. $\mathscr{T}=${X,$\varnothing$}; I can't think of anything else that is homeomorphic to this one.
(B) "singles"
(B1): $\mathscr{T}=${X,$\varnothing$,{1}};
(B2): $\mathscr{T}=${X,$\varnothing$,{2}};
(B3): $\mathscr{T}=${X,$\varnothing$,{3}};
(C) "doubles"
(C1): $\mathscr{T}=${X,$\varnothing$,{1,2}};
(C2): $\mathscr{T}=${X,$\varnothing$,{2,3}};
(C3): $\mathscr{T}=${X,$\varnothing$,{3,1}};
(D) "single-doubles"
(D1): $\mathscr{T}=${X,$\varnothing$,{1},{1,2}};
(D2): $\mathscr{T}=${X,$\varnothing$,{1},{1,3}};
(D3): $\mathscr{T}=${X,$\varnothing$,{2},{2,1}};
(D4): $\mathscr{T}=${X,$\varnothing$,{2},{2,3}};
(D5): $\mathscr{T}=${X,$\varnothing$,{3},{3,1}};
(D6): $\mathscr{T}=${X,$\varnothing$,{3},{3,2}};
(D') "single-doubles (disjoint)"
(D'1): $\mathscr{T}=${X,$\varnothing$,{3},{1,2}};
(D'2): $\mathscr{T}=${X,$\varnothing$,{2},{1,3}};
(D'3): $\mathscr{T}=${X,$\varnothing$,{1},{2,3}};
(E) "single-single-doubles"
(E1): $\mathscr{T}=${X,$\varnothing$,{1},{2},{1,2}};
(E2): $\mathscr{T}=${X,$\varnothing$,{1},{3},{1,3}};
(E3): $\mathscr{T}=${X,$\varnothing$,{2},{3},{2,3}};
(F) "single-double-doubles"
(F1): $\mathscr{T}=${X,$\varnothing$,{1},{1,2},{1,3}};
(F2): $\mathscr{T}=${X,$\varnothing$,{2},{2,1},{3,2}};
(F3): $\mathscr{T}=${X,$\varnothing$,{3},{3,2},{3,1}};
(G) "single-single-double-doubles"
(G1): $\mathscr{T}=${X,$\varnothing$,{1},{2},{1,2},{2,3}};
(G2): $\mathscr{T}=${X,$\varnothing$,{1},{2},{1,2},{3,1}};
(G3): $\mathscr{T}=${X,$\varnothing$,{1},{3},{1,2},{3,1}};
(G4): $\mathscr{T}=${X,$\varnothing$,{1},{3},{2,3},{3,1}};
(G5): $\mathscr{T}=${X,$\varnothing$,{2},{3},{2,3},{3,1}};
(G6): $\mathscr{T}=${X,$\varnothing$,{2},{3},{1,2},{2,3}};
(H) power set: $\mathscr{T}=${X,$\varnothing$,{1}, {2},{3},{1,2},{2,3},{3,1}}; I can't think of anything else that is homeomorphic to this one.
Is this a complete list of all topologies on X?
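A brute-force check of the counts (not part of the original post, standard library only):

```python
# Enumerate all topologies on {1,2,3} and count them up to homeomorphism
# (i.e. up to relabelling the three points).
from itertools import combinations, permutations

X = frozenset({1, 2, 3})
subsets = [frozenset(s) for r in range(4) for s in combinations(X, r)]

def is_topology(T):
    return (frozenset() in T and X in T and
            all(a | b in T and a & b in T for a in T for b in T))

topologies = []
for r in range(len(subsets) + 1):
    for choice in combinations(subsets, r):
        T = frozenset(choice)
        if is_topology(T):
            topologies.append(T)

def relabel(T, p):                      # apply a bijection p: X -> X to T
    return frozenset(frozenset(p[x] for x in s) for s in T)

classes = set()
for T in topologies:
    orbit = frozenset(relabel(T, dict(zip((1, 2, 3), perm)))
                      for perm in permutations((1, 2, 3)))
    classes.add(orbit)

print(len(topologies), len(classes))    # 29 topologies, 9 homeomorphism classes
```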
REPLY [6 votes]: You’re missing the ones homeomorphic to $\big\{\varnothing,X,\{1\},\{2,3\}\big\}$; there are $3$ of those. Also, your (E) group lists one topology twice: (E1) and (E3) are the same. The question wants you to list one topology from each of the $9$ groups (including the group that I just added).<|endoftext|>
TITLE: Any undirected graph on 9 vertices with minimum degree at least 5 contains a subgraph $K_4$?
QUESTION [5 upvotes]: Let $G$ be a simple undirected graph on $9$ vertices in which every vertex has degree at least 5. Prove or disprove that $G$ contains a $K_4$ subgraph.
I came up with this question when I was trying to find the Ramsey number $R(4,3)$. I think my conjecture is correct but I am unable to prove it. If anyone has any idea, please share it with me. Thank you in advance!
REPLY [2 votes]: If $\chi(G)=3$, the graph cannot contain any $K_4$ as a subgraph (a $K_4$ needs four colours), so a $3$-chromatic example settles the question:
Take $K_{3,3,3}$, the complete tripartite graph with three parts of size $3$. It is just a $K_9$ with only $9$ edges removed, it is $6$-regular on $9$ vertices, and its chromatic number is $3$ (no two vertices in the same part are joined by an edge, although the graph has plenty of embedded triangles), so it contains no $K_4$.
On the other hand, it is not difficult to show that any graph on $9$ vertices with more than $27$ edges has a subgraph isomorphic to $K_4$. Quite disappointing and not even surprising, since a graph fulfilling such constraints is full of triangles and almost complete.
REPLY [2 votes]: Clearly, since the sum of degrees across the 9 vertices must be even, there must be a vertex with degree at least $6$, meaning that only 2 vertices are not adjacent to this vertex. Choose the vertex of highest degree (degree $6$ or higher) and label it as $A$, and the vertices connected to it, the "linked set", as $\{B,C,D,E,F,G\}$. Each of these has at least $4$ more edges to make in addition to the edge to $A$, so at least $2$ of these in each case stay within the linked set. So this gives 6 vertices and at least 6 edges in the linked set, which can be connected as a ring to avoid triangles. Then the other two vertices, $\{H,I\}$, can be connected to each point on the ring (but not to each other) to avoid any $K_4$.
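A quick computational check of this construction (a sketch, not part of the original answer; it assumes networkx is installed):

```python
# Build the ring-plus-A-plus-{H,I} graph above and verify it has minimum
# degree 5 and no clique larger than a triangle (hence no K_4).
import networkx as nx

G = nx.cycle_graph(["B", "C", "D", "E", "F", "G"])    # the 6-ring
G.add_edges_from(("A", v) for v in "BCDEFG")          # A joined to the ring
for extra in ("H", "I"):                              # H, I joined to the ring only
    G.add_edges_from((extra, v) for v in "BCDEFG")

print(min(d for _, d in G.degree()))                  # 5: every degree is >= 5
print(max(len(c) for c in nx.find_cliques(G)))        # 3: no K_4 subgraph
```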
<|endoftext|>
TITLE: Why does $(128)!$ equal the product of these binomial coefficients $128! = \binom{128}{64}\binom{64}{32}^2 \dots \binom21^{64}$?
QUESTION [9 upvotes]: I'm working through some combinatorics practice sets and found the following problem that I can't make heads or tails of.
It asks to prove the following:
$$128! = \binom{128}{64}\binom{64}{32}^2\binom{32}{16}^4\binom{16}8^8\binom 84^{16}\binom 42^{32}\binom{2}{1}^{64}$$
Weird, huh? The first thing I noticed is that the exponents mirror the $r$ variables. I would normally just re-express each statement in $\frac{n!}{(n-r)!r!}$ form, but the exponents throw me for a loop. Are there any intuitions about factorials or nCr I should be considering here?
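The identity is easy to confirm with exact integer arithmetic before looking for the combinatorial reason (a quick check, not part of the original post):

```python
# Verify the claimed identity exactly with Python integers.
from math import comb, factorial

rhs = (comb(128, 64) * comb(64, 32)**2 * comb(32, 16)**4 * comb(16, 8)**8
       * comb(8, 4)**16 * comb(4, 2)**32 * comb(2, 1)**64)
print(rhs == factorial(128))   # True
```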
REPLY [10 votes]: Divide $128$ items in half, and assign one half a $1$ bit in the first digit and the other a $0$ bit. Then divide each half in half again, and in each half assign one half a $1$ bit in the second digit and the other a $0$ bit. Continue until the halves consist of single elements. Now each element has been assigned a binary number from $0$ to $127$. The left-hand side counts the number of ways of assigning the numbers, and the right-hand side counts the number of ways of performing the subdivisions.<|endoftext|>
TITLE: Linear transformation $T$ such that for every extension $\overline{T}$, $\|\overline{T}\|>\|T\|$.
QUESTION [8 upvotes]: Let $E$ and $F$ be normed spaces such that $\dim F < \infty$, $G$ a subspace of $E$ and $T:G\rightarrow F$ a continuous linear map. I know that there exists a continuous linear extension $\overline{T}:E\rightarrow F$. Also, if $E$ is a Hilbert space, then $\overline{T}$ can be chosen in the way that $\|\overline{T}\|=\|T\|$.
Problem: Find an example of $E$, $F$, $G$ and $T$ (like above) such that every continuous linear extension $\overline{T}$ has a greater norm, i.e. $\|\overline{T}\|>\|T\|$.
Now, $F$ must be at least a 2-dimensional space, otherwise I could use Hahn-Banach to find an extension with equal norm. My professor told me it could be done with $E$ of finite dimension. Of course, I tried to come up with an example of $E$ with a norm that doesn't satisfy the parallelogram law. For example, $E=\left(\mathbb{R}^3,\|\cdot\|_{\infty}\right)$ and $F=\left(\mathbb{R}^2,\|\cdot\|_1\right)$. But I couldn't prove that it works with any example I tried using those spaces.
Can somebody help me to find an example and assure that it really has that property?
EDIT:
Apparently, it can't be done with $E$ of finite dimension nor with $F$ equipped with the $\sup$ norm, as @Hamza proved below.
REPLY [4 votes]: Let $E=\mathbb R^3$, equipped with the sup norm, namely
$$
\|(x,y,z)\|_\infty = \max\{|x|, |y|, |z| \},
$$
and let
$$
F=G=\{(x, y, z)\in E : x+y+z=0\},
$$
equipped with the induced norm from $E$. Also let
$$
T:G\to F
$$
be the identity function. Then clearly $\|T\|=1$, but any extension $\bar T:E\to F$ has norm strictly bigger than
one.
The reason is that any such $\bar T$ is necessarily a projection from $E$ to $G$ and there is no
such projection with norm $1$.
The best way to convince oneself of this fact is to make a cardboard
model of the unit ball of $E$ (a cube),
cut it along the regular hexagon in which the plane $x+y+z=0$ meets the cube, place one of the two halves on top of the table with the hexagonal face down,
and attempt to shine a flashlight so that the shadow is restricted to within the hexagon. It is impossible!
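A small numerical companion to this (a sketch, not part of the original answer; it only checks the single most symmetric projection onto the plane, not all of them):

```python
# The symmetric projection P v = v - mean(v)*(1,1,1) onto the plane x+y+z=0
# already has sup-norm 4/3 > 1.
import numpy as np

P = np.eye(3) - np.ones((3, 3)) / 3          # projection with kernel span{(1,1,1)}
print(np.abs(P).sum(axis=1).max())           # induced inf-norm = 4/3

v = np.array([1.0, -1.0, -1.0])              # a vector attaining the norm
print(np.abs(P @ v).max() / np.abs(v).max()) # 1.333...
```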
This is based on another answer I recently gave to this question.<|endoftext|>
TITLE: How can I rewrite recursive function as single formula?
QUESTION [7 upvotes]: There is the following recursively defined sequence
$$
\begin{equation}
a_n=
\begin{cases}
-1, & \text{if}\ n = 0 \\
1, & \text{if}\ n = 1\\
10a_{n-1}-21a_{n-2}, & \text{if}\ n \geq 2
\end{cases}
\end{equation}
$$
I know this can be rewritten as
$$
a_n=7^n-2\cdot3^n
$$
But how can I reach that statement? I found this problem on some particular website. My skills are not enough to solve such things. Someone told me I have to read about generating functions, but it didn't help me.
I would be thankful if someone explained it to me.
REPLY [4 votes]: This is a homogeneous linear recurrence relation with constant coefficients. From
$$
a_n = 10 a_{n-1} -21 a_{n-2}
$$
you can infer the order $d=2$ and the characteristic polynomial:
$$
p(t) = t^2 - 10 t + 21
$$
Calculating the roots:
$$
0 = p(t) = (t - 5)^2 - 25 + 21 \iff
t = 5 \pm 2
$$
this gives the general solution
$$
a_n = k_1 3^n + k_2 7^n
$$
The two constants have to be determined from two initial conditions:
$$
-1 = a_0 = k_1 + k_2 \\
1 = a_1 = 3 k_1 + 7 k_2
$$
This leads to $4 = 4 k_2$ or $k_2 = 1$ and thus $k_1 = -2$.
So we get
$$
a_n = -2 \cdot 3^n + 7^n
$$<|endoftext|>
TITLE: Prove that Standard Deviation is always $\geq$ Mean Absolute Deviation
QUESTION [5 upvotes]: Where $$s = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2}$$ and
$$ M = \frac{1}{n} \sum_{i=1}^{n} |x_i - \bar{x}|$$
I came up with a sketchy proof for the case of $2$ values, but I would like a way to generalize (my "proof" unfortunately doesn't, as far as I can tell).
Proof for $2$ values (I would appreciate feedback on this as well):
$$\frac{1}{\sqrt{2}} \sqrt{(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2} \geq \frac{1}{2} (|x_1- \bar{x}| + |x_2- \bar{x}|)$$
Now let $|x_1- \bar{x}| = a$ and $|x_2- \bar{x}| = b$ be the $2$ legs of a right triangle and $\sqrt{(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2} = c$ its hypotenuse.
And let $\theta$ be the angle between $c$ and either $a$ or $b$.
Then $\sin{\theta} + \cos{\theta} = \frac{a}{c} + \frac{b}{c} = \frac{a+b}{c} = \sqrt{2}\cdot\frac{\frac{1}{2} (|x_1- \bar{x}| + |x_2- \bar{x}|)}{\frac{1}{\sqrt{2}} \sqrt{(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2}}$,
so the claimed inequality is equivalent to $$ \sqrt{2} \geq \sin{\theta} + \cos{\theta}$$
And we know that $\max(\sin{\theta} + \cos{\theta}) = \sqrt{2}$. QED?
I have no idea how to prove the general case, though.
REPLY [5 votes]: Let $y_i = x_i - \bar{x}$ then by Cauchy-Schwarz we have the following:
$\sum_{i=1}^n |y_i| \frac{1}{n} \leq \|y\|_2 \sqrt{\sum_{i=1}^n \frac{1}{n^2}} = \sqrt{\frac{1}{n}\sum_{i=1}^n |x_i - \bar{x}|^2}$<|endoftext|>
TITLE: Does associativity imply closure?
QUESTION [6 upvotes]: Does associativity of binary operation imply closure under this operation?
Sometimes definitions of semigroup, group or vector space omit axiom of closure under corresponding operations and sometimes they don't.
One of the arguments for omitting the axiom that I found is that associativity implies closure.
As a possible proof, let + be a binary operation on a set A. Assume a, b and c are elements of A. Assume also that (b + c) is in the set but (a + b) is not.
Then a + (b + c) is well-defined (even though the result can be out of set).
However, if we assume that + is associative, we will get:
a + (b + c) = (a + b) + c
But that is not true, because the second "addition" on the right-hand side is not defined: it would have to combine (a + b), which lies outside the set, with c.
So for associativity to even make sense, (a + b) must be in the set.
Does this argument make sense? Is it true?
UPDATE: In the possible proof above, the error is in the first assumption. If + is a binary operation on A and a and b are in A, then (a + b) must be in the set by the definition of a binary operation (a map A × A → A).
REPLY [4 votes]: Associativity does not imply closure - both are characteristics required of a set to form a group. Both (usually) need to be verified to show that a set forms a group under said binary operation, although it is the binary operation that usually implies the closure property.<|endoftext|>
TITLE: Polynomial ring with arbitrarily many variables in ZF
QUESTION [14 upvotes]: For a given field $k$ and a set $X$ we want to define the ring $k[X]$ of polynomials with $X$ as the set of variables. We do not assume $X$ to be finite. And we want to do this without employing axiom of choice.
Informally, the elements of $k[X]$ will be finite sums of monomials of the form $cx_1^{k_1}\dots x_n^{k_n}$, where each monomial is determined by a coefficient $c\in k$, finitely many elements $x_1,\dots,x_n\in X$ and the exponents $k_1,\dots,k_n$, which are positive integers. Addition and multiplication of polynomials from $k[X]$ will be defined in the natural way. However, we also should be able to describe this algebraic structure more formally. Especially if we are trying to use it in some proof in the axiomatic system ZF. In this case it is also important to check that we have not used AC anywhere in the proof. (Using Axiom of Choice can easily be overlooked, especially if somebody is used to work in ZFC rather than ZF, i.e., without the restriction that AC should be avoided.)
To explain a bit better what I mean, this is similar to defining the polynomial ring $k[x]$ of polynomials in a single variable $x$. Informally, we view polynomials as expressions of the form $a_nx^n+\dots+a_1x+a_0$ (with $a_i\in k$). And we will also write them in this way. But formally they are sequences of elements of $k$ with finite support.
I will also provide below a suggestion how to construct $k[X]$ in ZF. I would be interested in any comments on my approach, but also if there are different ways to do this, I'd be glad to hear about them.
This cropped up in a discussion with some colleagues of mine.
Transfinite induction and direct limit
One colleague suggested the following approach, which clearly uses AC (in the form of the well-ordering theorem). But he said that this is the construction of $k[X]$ which seems the most natural to him.
We take any well-ordering of the set $X=\{x_\beta; \beta<\alpha\}$.
By a transfinite induction we define rings $k_\beta$ for $\beta\le\alpha$ and also embeddings $k_\beta \hookrightarrow k_{\beta'}$ for any $\beta<\beta'\le\alpha$. The ring $k_\beta$ is supposed to represent the polynomials using only variables $x_\gamma$ for $\gamma\le\beta$. We put $k_0=k[x_0]$. Similarly, if $\beta$ is a successor ordinal, we can define $k_\beta=k_{\beta-1}[x_\beta]$. If $\beta$ is a limit ordinal, then we can take $k_\beta$ as a direct limit of the $k_\gamma$, $\gamma<\beta$.
Then the ring $k_\alpha$ is $k[X]$ which we wanted to construct.
It is not immediately clear to me whether the proof can be simplified in the way that the direct limit can be replaced by a union. However, I do not consider this to be an important difference, since using a direct limit (especially in such a simple case, with a linear order and embeddings) seems to me to be a rather standard approach for this type of construction. And anybody with enough mathematical maturity to study proofs at this level will probably not have a problem with the notion of direct limit.
The fact that this is indeed a ring (or even integral domain) follows from the fact that these properties are preserved by this simple version of direct limits. (I.e., direct limit based on linearly ordered system of rings with embeddings between them. This does not differ substantially from the proof that union of chain of rings is a ring.)
Functions with finite support
I have suggested this approach, which is more closely modeled after the case of ring in a single variable. Unless I missed something, this can be done in ZF, i.e., without use of ZFC.
Let us first try to define the set $M$ of all monomials of the form $x_1^{k_1}\dots x_n^{k_n}$. (I.e., the monomials with the coefficient $1$.) Every such monomial is uniquely determined by a finite subset $F\subseteq X$ and a function $g: F\to\mathbb N$, where $\mathbb N=\{1,2,\dots\}$. Or, if you will, $\mathbb N=\omega\setminus\{0\}$. (Since we are talking about finite sets, it might be worth mentioning that there are several notions of finite set in ZF. We take the standard one, which is sometimes called Tarski-finite or Kuratowski-finite. This notion of finiteness is well behaved. For our purposes it is important to know that the union of a finite set of finite sets is again finite, and the same is true for Cartesian products.)
So we can get $M$ as the set of all pairs $(F,g)$ with the properties described above. The existence of such a set can be established in ZF in a rather straightforward manner. (All properties of $F$ and $g$ can be described by a formula in the language of set theory. Clearly $F\in\mathcal P(X)$. Or we can use the set $\mathcal P^{<\omega}(X)$ of finite subsets of $X$ instead. The function $g$ belongs to the set of all functions from such $F$'s to $\mathbb N$. For each $F$ we have the set $\mathbb N^F$ consisting of all functions $F\to\mathbb N$. Then we can simply take the union $G=\bigcup\limits_{F\in\mathcal P(X)} \mathbb N^F$, based on the axiom of union. Then we use the axiom schema of specification to get only those pairs from $\mathcal P(X)\times G$ which have the required properties.)
Now we have the set $M$. We want to model somehow the finite sums of elements from $M$ multiplied by coefficients from $k$. To this end we simply take the functions from $M$ to $k$ with finite support.
So far we have only defined the underlying set of $k[X]$. We still need to define addition and multiplication, and verify that this is an integral domain. However, any polynomial $p\in k[X]$ only uses finitely many variables, since it has finitely many monomials and each of them only contains finitely many variables. If we are verifying closure under addition or multiplication, or some property of an integral domain such as associativity or distributivity, then any such condition only involves finitely many polynomials and thus only finitely many variables. So we can view the condition as a statement about polynomials in $k[F]$, where $F$ is some finite subset of $X$. Assuming we already know that a polynomial ring in finitely many variables over a field $k$ is an integral domain, this argument can be used to show that $k[X]$ is an integral domain, too.
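To make the data involved concrete, here is a small illustrative sketch (not part of the set-theoretic discussion): polynomials as finite-support functions from monomials to $k$, with $k=\mathbb Q$ played by Python's Fraction and monomials represented as frozensets of (variable, exponent) pairs.

```python
# A toy model of the "functions with finite support" construction of k[X].
from fractions import Fraction
from collections import defaultdict

def mono_mul(m1, m2):
    """Multiply two monomials, each a frozenset of (variable, exponent) pairs."""
    exps = defaultdict(int)
    for var, e in list(m1) + list(m2):
        exps[var] += e
    return frozenset(exps.items())

def poly_mul(p, q):
    """Multiply two polynomials, each a dict {monomial: coefficient} with finite support."""
    out = defaultdict(Fraction)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            out[mono_mul(m1, m2)] += c1 * c2
    return {m: c for m, c in out.items() if c != 0}

one = frozenset()                                            # the empty monomial
p = {frozenset({("x", 1)}): Fraction(1), one: Fraction(1)}   # the polynomial x + 1
print(poly_mul(p, p))                                        # x^2 + 2x + 1
```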
The above discussion occurred in connection with the proof of Andreas Blass' result that existence of Hamel basis for vector space over arbitrary fields implies Axiom of Choice. This proof can be found for example in the references below. It is also briefly described in this answer.
In this proof the polynomials from $k[X]$ are used. (Then the field $k(X)$ of all rational functions in variables from $X$ is created - in other words, the quotient field of $k[X]$. And the proof then uses the existence of a Hamel basis of $k(X)$ considered as a vector space over a particular subfield of $k(X)$.)
Unless I missed something, the proofs given there do not discuss whether $k[X]$ can be constructed without AC. Which suggests that the authors considered this point to be simple enough to be filled in by a reader. So I assume that the proof of this fact should not be too difficult. (Of course, if you know of another reference for a proof of this result which also discusses this issue, I'd be glad to learn about it.)
A. Blass: Existence of bases implies the axiom of choice. Contemporary Mathematics, 31:31–33, 1984. Available on the authors website
Theorem 5.4 in L. Halbeisen: Combinatorial Set Theory, Springer, 2012. The book is freely available on the author's website.
Theorem 4.44 in H. Herrlich Axiom of choice, Springer, 2006, (Lecture Notes in Mathematics 1876).
There are these related questions:
Polynomial ring with uncountable indeterminates.
Polynomial ring indexed by an arbitrary set.
The answers given there can be considered somewhat similar to the approach I suggested above. However, it is not discussed there whether AC was used somewhere in this construction.
REPLY [10 votes]: Your "functions with finite support" approach is the standard approach to take to this, and is very likely what any author who considers the question trivial has in mind. The correct notion of "finite" to take when defining monomials is the usual one, as you have done (this is necessary for the polynomial ring you get to be the free commutative $k$-algebra on its variables).
Just to give some evidence that this approach is standard, this is the definition of $k[X]$ used in Lang's Algebra, for instance. (Lang describes this construction rather tersely on page 106 as an example of the more general monoid algebra construction described on pages 104-5. Lang's construction of the set $M$ of monomials is a little different from yours: he defines it as a subset of the free abelian group on $X$, which he constructs as the group of finite-support functions $X\to\mathbb{Z}$ on page 38.)
Note that it is not actually necessary to observe that every polynomial involves only finitely many variables to define or verify the properties of addition and multiplication. Indeed, addition can just be defined pointwise on the coefficients and makes sense for arbitrary functions from $M$ to $k$. Multiplication also makes sense for arbitrary functions from $M$ to $k$ (which you can think of as formal power series): you just need to define the coefficient of a given monomial $x_1^{k_1}\dots x_n^{k_n}$ in a product, and you can do this by the usual convolution formula. You then just have to check that if two functions $M\to k$ each have finite support, then so does their product; this is straightforward (a monomial in the support of the product must be the product of monomials in the support of each of the factors). The verification of the basic properties of multiplication then works exactly as it does in the case of finitely many variables.<|endoftext|>
TITLE: Number of solutions to this nice equation $\varphi(n)+\tau(n^2)=n$
QUESTION [5 upvotes]: How many natural numbers $n$ satisfy the equation$$\varphi(n)+\tau(n^2)=n$$where $\varphi$ is the Euler's totient function and $\tau$ is the divisor function i.e. number of divisors of an integer.
I made this equation and I think it is not hard. I haven't solved this completely yet, so I want you to work on this along with me. I'd love to see your solutions!
REPLY [2 votes]: Hint: You can show that $n$ is odd and has at most two (distinct) prime factors. Then, it follows that $n=21$ and $n=25$ are the only solutions.
Further Hint: Suppose $n=p^aq^br^cm$ where $p,q,r,a,b,c\in\mathbb{N}$ with $p,q,r$ being pairwise distinct primes, and $m\in\mathbb{N}$ is not divisible by $p$, $q$, or $r$. Prove that
$$n-\phi(n)> n\left(\frac{1}{2p}+\frac{1}{q}+\frac{1}{r}\right)>\tau\left(n^2\right)\,,$$
provided that $p,q,r$ are the smallest primes dividing $n$ with $2<p<q<r$.
Since $\frac{\phi(n)}{n}\leq\left(1-\frac{1}{p}\right)\left(1-\frac{1}{q}\right)\left(1-\frac{1}{r}\right)$ and, $p,q,r$ being odd primes with $p<q<r$, we have $q\geq 5$, $r\geq 7$ and $qr>7p$, it follows that $$\begin{align}n-\phi(n)&\geq n\left(\frac{1}{p}+\frac{1}{q}+\frac{1}{r}-\frac{1}{pq}-\frac{1}{pr}-\frac{1}{qr}+\frac{1}{pqr}\right)\\&>
n\Biggl(\left(\frac{1}{p}-\frac{1}{pq}-\frac{1}{pr}-\frac{1}{qr}\right)+\frac{1}{q}+\frac{1}{r}\Biggr)\\&>n\Biggl(\left(\frac{1}{p}-\frac{1}{5p}-\frac{1}{7p}-\frac{1}{7p}\right)+\frac{1}{q}+\frac{1}{r}\Biggr)>n\left(\frac{1}{2p}+\frac{1}{q}+\frac{1}{r}\right)\,.\end{align}$$ First, note that $m\geq\tau\left(m^2\right)$. If $b>1$, then $$\frac{n}{q}= p^aq^{b-1}r^cm> (2a+1)(2b+1)(2c+1)\,\tau\left(m^2\right)\geq \tau\left(n^2\right)\,.$$ Similarly, if $c>1$, then $$\frac{n}{r}> \tau\left(n^2\right)\,.$$ If $a>1$ and $b=c=1$, then $$\frac{n}{2p}=\frac{1}{2}p^{a-1}qrm> 9(2a+1)\,\tau\left(m^2\right)=\tau\left(n^2\right)\,.$$ If $a=b=c=1$, then $$n\left(\frac{1}{q}+\frac{1}{r}\right)=p(q+r)m> 27\,\tau\left(m^2\right)=\tau\left(n^2\right)\,.$$ This proves that $n-\phi(n)>\tau\left(n^2\right)$ for any odd natural number $n$ with at least three distinct prime divisors. Now, we shall prove that $n=21$ and $n=25$ are the only solutions in $\mathbb{N}$ to $n=\phi(n)+\tau\left(n^2\right)$. It is clear that $n\neq 1$ and that $n$ is odd. From the paragraph above, $n$ has at most two distinct prime divisors. If $n$ has one prime divisor, say $n=p^a$, then the required condition gives $p^{a-1}=2a+1$, which leads to $p=5$ and $a=2$, whence $n=25$. If $n$ has two prime divisors, say $n=p^aq^b$ with $2<p<q$, then the required condition gives $p^{a-1}q^{b-1}(p+q-1)=(2a+1)(2b+1)$, which leads to $p=3$, $q=7$ and $a=b=1$, whence $n=21$.<|endoftext|>
TITLE: Conjecture about primes and the factorial: for all primes $p>5$, must there exist a prime $q<p$ with $q\equiv m!\pmod p$ for some $2<m<p$?
QUESTION: Conjecture: for all primes $p>5$ there exist a prime number $q<p$ and an integer $m$ with $2<m<p$ such that $q\equiv m!\pmod p$.
In other words, for all primes $p>5$ there exist a prime $q<p$ among the residues of $m!$ modulo $p$ with $p>m$. And suppose
$n=sp^t$, where $p$ is the largest prime dividing $n$, $p\nmid s$ and $t>0$, then $0\equiv (pt)!\!\pmod n$. If $p$ is a large prime there are a lot of nonzero solutions to $x\equiv m!\!\pmod p$ and the probability for one of those solutions to be a prime increase with p.
REPLY [6 votes]: For $p=3$, $q$ has to be $2$.
Suppose that there exist $k,m\in\mathbb Z$ such that
$$3k+2=m!$$
Since $m\gt 2$, the RHS is divisible by $3$. This is a contradiction.
Added : Similarly, for $p=5$, there is no such prime $q$.<|endoftext|>
TITLE: The number of positive integer solutions to the equation $x_1+2x_2+...+nx_n=n^2.$
QUESTION [6 upvotes]: Let $n \ge 2, n \in \mathbb N$. $A_n$ denotes the number of positive integer solutions to the equation
$$x_1+2x_2+...+nx_n=n^2.$$
Prove inequality
$$\frac{n^n(n-1)^{n-1}}{2^{n-1}\left(n!\right)^2}<A_n\,.$$
REPLY: […] $$\cdots>\frac{1}{2^{n-1}n!}\,\binom{n^2-1}{n-1}\geq\frac{\big(n(n-1)\big)^{n-1}}{2^{n-1}n!(n-1)!}=\frac{n^n(n-1)^{n-1}}{2^{n-1}(n!)^2}\,.$$<|endoftext|>
TITLE: Subtracting expressions with radicals
QUESTION [11 upvotes]: I want to subtract the expressions $20\sqrt{72a^3b^4c} - 14\sqrt{8a^3b^4c}$. I simplified this to $120ab^2\sqrt{2ac}-28ab^2\sqrt{2ac}$. My textbook says the answer is $92ab^2\sqrt{2ac}$. Why doesn't the $ab^2\sqrt{2ac}$ part change at all? I thought everything except the 92 would cancel out, since it looks like it's cancelling out. This is my first time using stackexchange, please tell me if I can ask this question better. Thanks.
REPLY [5 votes]: Although this question has several good answers and good points in the comments, I want to point out the explicit connection to the distributive law, in terms that can help in elementary school classrooms too.
Kids have little trouble answering the question
If I have 7 apples and get 2 more how many do I have then?
If you want to write the answer with more formal mathematics (nowadays called a number sentence) it's
7 apples + 2 apples = 9 apples .
This principle helps kids master place value: think "hundreds" instead of "apples" and you get $700 + 200 = 900$, even $900 + 200 = 1100$.
You can even touch on the etymology of "ninety" as coming from "nine tens".
In the OP's question the unit quantity, of which there are first $120$ and then only $92$, is $ab^2\sqrt{2ac}$.
TITLE: Future learning for a math graduate in applied mathematics references
QUESTION [6 upvotes]: As a mathematics graduate with a focus on programming, we did a whole lot of coding of mathematical statements (as well as proving them), yet we rarely saw real-life examples and applications of the given statements.
So for my future work I would like to learn some of the applications (mostly in electronics, programming, physics...) and get some references where I can continue to learn, so that - given a problem - I can correlate it to something I have learned before.
I am most interested in applications for these fields:
Numerical analysis
Interpolation (Hermite, multidimensional, Newton, Spline ...)
Approximation ( Least squares, uniform approximation,..)
Numerical methods for solving differential equations
Numerical methods for finding eigenvalues and eigenvectors
Numerical Methods for solving non-linear equations
...
Mathematical analysis
Integration by curve, surface
Integrals with parameters
Fourier series,Fourier transformation,Fourier integral
Uniform convergence (for sequence of functions, series, integrals)
Weierstrass function, Riemann zeta function
...
Measure theory
Lebesgue measure, Lebesgue integral
Radon–Nikodym theorem and derivative
Monotone convergence theorem, dominated convergence theorem
Lp spaces, norms
...
Complex analysis
Cauchy integral theorem
Picard's theorem
Laurent series ...
Holomorphic functions
...
References, books, websites - anything will do, just as long as there are multiple examples (also the more on the side of programming (algorithms, problem solving) and electronics the better)
REPLY [2 votes]: Probability theory is to me strictly pure mathematics, but with strong applicable properties. If you like measure theory, along with functional/modern analysis, then probability theory is the area for you; the axioms of probability theory (Kolmogorov) are strictly measure theoretic. Then how one builds up expected value and so on is also integration theoretical. One integrates with respect to a probability measure, thus making the Lebesgue integral or an equivalent necessary. I strongly encourage you to read the table of contents of Kallenberg's Foundations of Modern Probability, to get a glimpse of what probability theory might entail from the analytical standpoint. It is sadly a bit advanced though, but I do not expect someone to read it all.
Probability theory is to some degree either very analytical or very combinatorial/graph theoretical, i.e. either you get hooked on stochastic analysis and SDEs (along with stochastic integration and such), or you get hooked on percolation theory and Markov analysis, the latter having intimate connections with statistical mechanics (the Ising model, for instance).
And as a last point, probability theory is computationally challenging, in so far as we need strong programmers as well. For instance, Monte Carlo methods are a very popular numerical method.
I hope you do not mind me diverging from your wish list, but I believe probability theory is worth a notice.<|endoftext|>
TITLE: Evaluate $\int \frac {\sin(x)}{x^2 + 4x + 5}dx$
QUESTION [7 upvotes]: Question:
Evaluate
$$ \int \frac{\sin(x)}{x^2 + 4x + 5} dx=\int \frac {\sin(x)}{(x + 2)^2 + 1}dx $$
By using the change of variable $y = x + 2$ we have that $dy = dx$ then
$$I = \int \frac{\sin(y - 2)}{y^2 + 1} dy$$
$f = \sin(y - 2)$, $f' = \cos(y - 2)$
$g' = \frac {1} {y^2 + 1}$, $g = \arctan(y)$
$I = \sin(y - 2) \cdot \arctan(y) - \int \cos(y - 2) \arctan(y) dy$
$I_1 = \int \cos(y - 2) \cdot \arctan(y) dy$
How can I solve?
REPLY [2 votes]: This answer is inspired by the tag used by the OP.
You may find the value of $$\int_{-\infty}^{\infty} \frac{\sin (x)}{(x+2)^2 + 1} dx$$ by using complex integration. Consider the function $f(z) = \frac{1}{(z +2)^2 +1}$ then $f(z) e^{iz}$ is analytic everywhere on and above the real axis except at the point $z = -2 + i$.
Let $C_R$ be the upper half of the circle $|z| = R$, with $R > \sqrt{5}$, from $z = -R$ to $z = R$.
Then integrating $f(z) e^{iz}$ yields
$$\int_{-R}^{R} \frac{e^{ix}}{(x+2)^2 + 1} dx = 2\pi i \,\,\mathrm{Res}_{z = -2 + i}\,\, [f(z)e^{iz}] - \int_{C_R} f(z)e^{iz} dz \tag{*}$$
where $\mathrm{Res}_{z = -2 + i}\,\, [f(z)e^{iz}] = \frac{e^{-1}(\cos 2 - i\sin 2)}{2i}$, and by showing that $\int_{C_R} f(z)e^{iz} dz \to 0$ as $R \to \infty$ (why?) we get that the imaginary part of $(*)$ is
$$\int_{-\infty}^{\infty} \frac{\sin (x)}{(x+2)^2 + 1} dx = \color{red}{-\frac{\pi\sin 2}{e}}$$
as $R \to \infty$.<|endoftext|>
TITLE: Is there a closed form for the integral $\int_0^1 x^n \log^m (1-x) \, {\rm d}x$?
QUESTION [5 upvotes]: Let $n \in \mathbb{N}$. We know that:
$$\int_0^1 x^n \log(1-x) \, {\rm d}x = - \frac{\mathcal{H}_{n+1}}{n+1}$$
Now, let $m , n \in \mathbb{N}$. What can we say about the integral
$$\int_0^1 x^n \log^m (1-x) \, {\rm d}x$$
For starters we know that $\displaystyle \log^m (1-x)=m! \sum_{k=m}^{\infty} (-1)^k \frac{s(k, m)}{k!} x^k$ where $s(k, m)$ are the Stirling numbers of first kind.
Thus
\begin{align*}
\int_{0}^{1} x^n \log^m (1-x) \, {\rm d}x &=m! \int_{0}^{1}x^n \sum_{k=m}^{\infty} (-1)^k \frac{s(k, m)}{k!} x^k \, {\rm d}x \\
&= m! \sum_{k=m}^{\infty} (-1)^k \frac{s(k, m)}{k!} \int_{0}^{1}x^{n+k} \, {\rm d}x\\
&= m! \sum_{k=m}^{\infty} (-1)^k \frac{s(k, m)}{k!} \frac{1}{n+k+1}
\end{align*}
Can we simplify? I know that Stirling numbers are related to the harmonic numbers but I don't remember all the identities.
REPLY [3 votes]: Another closed form follows by differentiating the beta function multiple times and applying the Faà di Bruno's formula.
Claim. For positive integers $m$ and $n$,
$$ \mathcal{J}_{n,m} := \int_0^1 x^{n-1}\log^m (1-x) \, \mathrm{d}x = (-1)^m \frac{m!}{n} \sum_{\alpha\in I_m} \prod_{k=1}^m \frac{1}{\alpha_k!} \bigg(\frac{H_n^{(k)}}{k}\bigg)^{\alpha_k} \tag{1} $$
where $\alpha$ runs over the set of indices
$$I_m = \{(\alpha_1,\cdots,\alpha_m)\in\Bbb{N}_0^m : 1\cdot\alpha_1+\cdots+m\cdot\alpha_m=m\}.$$
This formula gives an almost explicit formula for $\mathcal{J}_{n,m}$ in terms of a polynomial in $H_n^{(1)}, \cdots, H_n^{(m)}$ at the expense of introducing a certain combinatorial object, namely $I_m$.
Proof. Notice that
$$ \int_0^1 x^{n-1}(1-x)^s \, \mathrm{d}x = \frac{(n-1)!}{(s+1)\cdots(s+n)} = (n-1)!\exp\left(-\sum_{j=1}^n \log(s+j) \right). $$
Letting $f(s) = -\sum_{j=1}^n \log(s+j) $ and applying the Faà di Bruno's formula, we have
$$ \mathcal{J}_{n,m} = (n-1)!e^{f(0)} \sum_{\alpha \in I_m} m! \prod_{k=1}^{m} \frac{1}{\alpha_k !} \bigg( \frac{f^{(k)}(0)}{k!} \bigg)^{\alpha_k}. \tag{2}$$
Plugging $f(0) = -\log n!$ and
$$ f^{(k)}(0) = \sum_{j=1}^n (-1)^k (k-1)! (s+j)^{-k} \bigg|_{s=0} = (-1)^k (k-1)! H_n^{(k)} $$
into $\text{(2)}$ and simplifying the resulting expression yields $\text{(1)}$.<|endoftext|>
TITLE: Is there a quick way to justify that this elementary probability is equal to $\frac23$?
QUESTION [8 upvotes]: I just solved this problem with the conditional probability formula and after a while the answer was surprisingly $\frac23$.
I believe there must be a tricky short way to calculate it.
Can somebody help me?
There are $n$ urns of which the $r$th contains $r-1$ red balls and $n-r$ magenta balls. You pick an urn at random and remove two balls at random without replacement. Find the probability that: the second ball is magenta, given that the first is magenta.
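A quick Monte Carlo check (not part of the original post) that the value really is $\frac23$, independently of $n$:

```python
# Simulate the urn experiment and estimate P(second magenta | first magenta).
import random

def trial(n):
    r = random.randrange(1, n + 1)            # urn r holds r-1 red, n-r magenta
    balls = ["R"] * (r - 1) + ["M"] * (n - r)
    return random.sample(balls, 2)            # two draws without replacement

def estimate(n, trials=200_000):
    both = first_m = 0
    for _ in range(trials):
        first, second = trial(n)
        if first == "M":
            first_m += 1
            both += second == "M"
    return both / first_m

print(estimate(10))   # ~0.667
```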
REPLY [4 votes]: Imagine a row of $n$ urns with a single uncolored ball in each of them. One of the urns is selected at random, and its ball is colored white. The balls to the left of the white ball are colored red, and the balls to the right of the white ball are colored magenta. Now two more urns are selected at random, and their contents inspected. With probability ${1\over3}$ the white ball is the middle of the three, hence with probability ${2\over3}$ the contents of the other two urns are of the same color.<|endoftext|>
TITLE: How to prove that $\int_{0}^{1}\ln{(x/(1-x))}\ln{(1+x-x^2)}\frac{dx}{x}=-\frac{2}{5}\zeta{(3)}$
QUESTION [11 upvotes]: $$\int_{0}^{1}\ln{\big(\frac{x}{1-x}\big)}\ln{(1+x-x^2)}\frac{dx}{x}=-\frac{2}{5}\zeta{(3)}$$
Put $$\frac{x}{1-x}=y$$
$$I=\int_{0}^{\infty}\ln{y}\ln{(1+3y+y^2)}\frac{dy}{y(y+1)}=\frac{8}{5}\zeta{(3)}$$
Simple integral at first sight, however I cannot prove that. I would appreciate your help.
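A quick numerical sanity check of both claims (not part of the original post; it assumes scipy is available):

```python
# Compare the two integrals against -2/5 zeta(3) and 8/5 zeta(3).
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

f = lambda x: np.log(x / (1 - x)) * np.log(1 + x - x**2) / x
val1, _ = quad(f, 0, 1)
print(val1, -2 / 5 * zeta(3))            # both ~ -0.48082

# the second integral, rewritten over (0,1) via the substitution y = x/(1-x):
h = lambda x: np.log(x / (1 - x)) * np.log((1 + x - x**2) / (1 - x)**2) / x
val2, _ = quad(h, 0, 1)
print(val2, 8 / 5 * zeta(3))             # both ~ 1.92329
```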
REPLY [11 votes]: Since $x\left(1-x\right)<1$ if $x\in\left(0,1\right)
$ we have $$\begin{align}
\int_{0}^{1}\frac{\log\left(\frac{x}{1-x}\right)\log\left(-x^{2}+x+1\right)}{x}dx & =\sum_{k\geq1}\frac{\left(-1\right)^{k+1}}{k}\int_{0}^{1}\log\left(x\right)x^{k-1}\left(1-x\right)^{k}dx \\
& -\sum_{k\geq1}\frac{\left(-1\right)^{k+1}}{k}\int_{0}^{1}\log\left(1-x\right)x^{k-1}\left(1-x\right)^{k}dx.
\end{align}
$$ Now by definition of Beta function we have $$
\int_{0}^{1}x^{a}\left(1-x\right)^{k}dx=B\left(a+1,k+1\right)
$$ and so $$\begin{align}
\int_{0}^{1}x^{k-1}\left(1-x\right)^{k}\log\left(x\right)dx = &\frac{\partial}{\partial a}\left(B\left(a+1,k+1\right)\right)_{a=k-1} \\
= & B\left(k,k+1\right)\left(\psi\left(k\right)-\psi\left(2k+1\right)\right)
\end{align}
$$ and in a similar way we have $$\begin{align}
\int_{0}^{1}x^{k-1}\left(1-x\right)^{k}\log\left(1-x\right)dx= & \frac{\partial}{\partial b}\left(B\left(k,b\right)\right)_{b=k+1} \\
= & B\left(k,k+1\right)\left(\psi\left(k+1\right)-\psi\left(2k+1\right)\right)
\end{align}
$$ then $$\begin{align}
\int_{0}^{1}\frac{\log\left(\frac{x}{1-x}\right)\log\left(-x^{2}+x+1\right)}{x}dx= & -\sum_{k\geq1}\frac{\left(-1\right)^{k+1}}{k}B\left(k,k+1\right)\left(\psi\left(k+1\right)-\psi\left(k\right)\right) \\
= & -\sum_{k\geq1}\frac{\left(-1\right)^{k+1}}{k^{2}}B\left(k,k+1\right) \\ = & -\sum_{k\geq1}\frac{\left(-1\right)^{k+1}}{k^{3}\dbinom{2k}{k}}
\end{align}
$$ and the last series has a well known closed form $$\sum_{k\geq1}\frac{\left(-1\right)^{k+1}}{k^{3}\dbinom{2k}{k}}=\frac{2}{5}\zeta\left(3\right).\tag{1}$$
For the other integral note that $$I=\int_{0}^{\infty}\frac{\log\left(y\right)\log\left(1+3y+y^{2}\right)}{y\left(y+1\right)}dy\overset{y=\frac{x}{1-x}}{=}\int_{0}^{1}\frac{\log\left(\frac{x}{1-x}\right)\log\left(\frac{1+x-x^{2}}{\left(1-x\right)^{2}}\right)}{x}dx
$$ $$=\int_{0}^{1}\frac{\log\left(\frac{x}{1-x}\right)\log\left(1+x-x^{2}\right)}{x}dx-2\int_{0}^{1}\frac{\log\left(\frac{x}{1-x}\right)\log\left(1-x\right)}{x}dx
$$ and it is sufficient to observe that $$-2\int_{0}^{1}\frac{\log\left(x\right)\log\left(1-x\right)}{x}dx=2\sum_{k\geq1}\frac{1}{k}\int_{0}^{1}\log\left(x\right)x^{k-1}dx
$$ $$=-2\sum_{k\geq1}\frac{1}{k^{3}}=-2\zeta\left(3\right)
$$ and $$ 2\int_{0}^{1}\frac{\log^{2}\left(1-x\right)}{x}dx=2\int_{0}^{1}\frac{\log^{2}\left(x\right)}{1-x}dx=2\sum_{k\geq0}\int_{0}^{1}\log^{2}\left(x\right)x^{k}dx
$$ $$=4\sum_{k\geq1}\frac{1}{k^{3}}=4\zeta\left(3\right)
$$ so using the previous result we have $$I=\zeta\left(3\right)\left(2-\frac{2}{5}\right)=\frac{8}{5}\zeta\left(3\right).
$$<|endoftext|>
TITLE: What is a short exact sequence?
QUESTION [9 upvotes]: I'll just quote my book here so you can see the definitions I have:
Suppose that you are given a sequence of vector spaces $V_i$ and linear maps $\varphi_i: V_i\to V_{i+1}$ connecting them, as illustrated below: $$\cdots \longrightarrow V_{i-1} \stackrel{\varphi_{i-1}}{\longrightarrow} V_i \stackrel{\varphi_{i}}{\longrightarrow} V_{i+1} \stackrel{\varphi_{i+1}}{\longrightarrow} \cdots$$ The maps are said to be exact at V_i if $\operatorname{im} \varphi_{i-1} = \operatorname{ker}\varphi_i$. The sequence is called an exact sequence if the maps are exact at $V_i$ for all $i$. $\dots$ If $V_1, V_2$ and $V_3$ are three vector spaces, and if the sequence $$0 \stackrel{\varphi_0}{\longrightarrow} V_{1} \stackrel{\varphi_{1}}{\longrightarrow} V_2 \stackrel{\varphi_{2}}{\longrightarrow} V_{3} \stackrel{\varphi_{3}}{\longrightarrow} 0 \tag{1.7}$$ is exact, it is called a short exact sequence. In this diagram "$0$" represents the zero-dimensional vector space.
OK, here's what I'm not understanding. If the image of any function in this sequence is the kernel of the next function, doesn't every step of this just map to $0$? And even if it didn't, because we're starting with the $0$ vector space, everything has to map to $0$ because linear transformations always map $0$ to $0$. So I'm not understanding this definition at all. The first exercise right below these definitions is to show that equation $(1.7)$ implies that $\varphi_1$ is injective and $\varphi_2$ is surjective. But all I'm seeing here is a chain of functions mapping zero to zero. Can someone explain what I'm missing here?
REPLY [15 votes]: I'll answer with the most important example:
If $T:V \to W$ is a linear transformation between vector spaces, then
$0 \to \text{Nullspace}(T) \to V \to \text{Range}(T) \to 0$
is a short exact sequence, where the map Nullspace$(T) \to V$ is the inclusion, and the map $V \to \text{Range}(T)$ is just $v \mapsto Tv$.
Prove that this is exact.
Once you do, you'll have a whole family of short exact sequences which aren't trivial. Also, all short exact sequences are "isomorphic" to this one for some $T$.
Not really part of the answer, but an important fact about exact sequences (of finite-dimensional vector spaces, and only finitely many of them) is that if you take the alternating sum of dimensions (add all dimensions, but give odd-numbered terms a minus sign), you get zero. In the above example this is equivalent to a familiar fact from linear algebra. What is that fact?<|endoftext|>
TITLE: Why are an even number of flips required to get back to the original list?
QUESTION [9 upvotes]: Consider the list of numbers $[1, \cdots, n]$ for some positive integer $n$. Two distinct elements $i$ and $j$ of the list can be switched in a so-called flip. For example, let $f$ be a flip that switches $2$ and $4$. Then $f([1,2,3,4]) = [1,4,3,2]$. Now consider a sequence of $k$ flips $f_1, \cdots, f_k$ of the list $[1, \cdots, n]$ such that $f_1(f_2(\cdots f_k([1,\cdots,n])\cdots)) = [1,\cdots, n]$, i.e. performing all flips gives the original list. Then $k$ must be even.
I would like to find a proof of this proposition that is elementary as possible. I already came up with a justification using permutation groups which goes as follows:
Each flip $f_i$ corresponds to a transposition of the list $[1, \cdots, n]$. Since the composition $f_1f_2\cdots f_k$ results in the identity, it must be an even permutation. Thus any representation of $f_1f_2\cdots f_k$ as a product of transpositions must contain an even number of transpositions. In particular, since each $f_i$ is a transposition, it follows that $k$ must be even.
This "proof" trivializes the problem statement as it is by using relatively high-powered facts about permutation groups. Is there a lower-level proof that avoids using theses results? (Ideally such a proof would avoid permutation groups altogether and be understandable to the layman.)
Edit: to clarify (since this question hasn't been getting as much attention as I had hoped), any proof that avoids re-deriving these powerful results about permutations would suffice. Basically I would want a proof that does not prove much more than the question requires, i.e. doesn't have a part where it says "in particular".
REPLY [3 votes]: Let $\ell=(x_1,x_2,\ldots, x_n)$ be a list of the numbers in $[n]$. For each $k\in[n]$ denote by $p_k$ the number of $x_i$ $(1\leq i<k)$ with $x_i>x_k$, and call $\beta(\ell):=\sum_{k=1}^n p_k$ the badness of $\ell$. It is easy to convince oneself that a transposition ("flip") $\tau:\>\ell\mapsto\ell'$ changes the badness by an odd number. It follows that an odd number of flips cannot restore a given $\ell$ to its original version.<|endoftext|>
TITLE: Why is $\wedge$ a minimum and $\vee$ a maximum?
QUESTION [18 upvotes]: Why does $\wedge$ denote a minimum and $\vee$ a maximum? Where did this notation come from? I keep getting them mixed up because to me, $\wedge$ should be a maximum: it's a hill, or a curve reaching its maximum. Similarly, $\vee$ is a gulf, or a curve reaching its minimum, so it should be minimum. The way I am currently memorizing these notations is actually by using this hill/gulf analogy first, and then quickly reminding myself that it is the opposite of that.
Who decided that it should be this way?
REPLY [16 votes]: Where did this notation come from?
In lattice theory we have join and meet [see: Helena Rasiowa & Roman Sikorski, The Mathematics of Metamathematics (1963), page 34] :
the least upper bound of $a, b \in A$ will be denoted by $a \cup b$ and called the join of elements $a, b$, and the greatest lower bound of $a, b \in A$ will be denoted by $a \cap b$ and called the meet of $a, b$.
The symbols are motivated by the algebra of sets: the symbols $\cap$ and $\cup$ for intersection and union were used by Giuseppe Peano (1858-1932) in 1888 in Calcolo geometrico secondo l'Ausdehnungslehre di H. Grassmann.
In propositional calculus we have $\lor$ for disjunction, introduced by Russell in manuscripts from 1902-1903 and in 1906 in Russell's paper "The Theory of Implication," in American Journal of Mathematics, vol. 28.
And we have $\land$ for conjunction: first used in 1930 by Arend Heyting in “Die formalen Regeln der intuitionistischen Logik,” Sitzungsberichte der preußischen Akademie der Wissenschaften, phys.-math. Klasse, 1930.
The link is with boolean algebra and its use as interpretation for the propositional calculus:
a Boolean algebra is a non-empty set $A$, together with two binary operations $∧$ and $∨$ (on $A$), a unary operation $'$, and two distinguished elements $0$ and $1$, satisfying the following axioms [...]. There are several possible widely adopted names for the operations $∧, ∨$, and $'$. We shall call them meet, join, and complement (or complementation), respectively. The distinguished elements $0$ and $1$ are called zero and one.
We can then define a binary relation $\le$ in every Boolean algebra; we write
$p \le q$ in case $p ∧ q = p$, and we have that:
For each $p$ and $q$, the set $\{ p, q \}$ has the supremum $p ∨ q$ and the
infimum $p ∧ q$.<|endoftext|>
TITLE: Finding the method of moments estimator for the Uniform Distribution
QUESTION [8 upvotes]: Let $X_1, \ldots, X_n \sim \text{Uniform}(a,b)$ where $a$ and $b$ are unknown parameters and $a < b$.
(a) Find the method of moments estimators for $a$ and $b$.
(b) Find the MLE $\hat{a}$ and $\hat{b}$.
For part (b), consider that
$$
f(x) = \begin{cases} 0 & \text{ if } x \notin [a,b] \\
1/(b-a) & \text{ if } x \in [a,b] \\
\end{cases}
$$
Thus, the MLE estimate will be $(\min \{X_1, \ldots, X_n \}$, $\max \{X_1, \ldots, X_n \})$.
But what about part (a)?
REPLY [8 votes]: The first moment is
$$
\int_a^b x f(x)\,dx = \int_a^b \frac{x\,dx}{b-a} = \frac 1 2 \cdot \frac{b^2-a^2}{b-a} = \frac{b+a} 2.
$$
The second moment is
$$
\int_a^b x^2 f(x) \,dx = \int_a^b \frac{x^2\,dx}{b-a} = \frac 1 3 \cdot \frac{b^3 -a^3}{b-a} = \frac{b^2+ba+a^2} 3.
$$
So equate the sample moments with the population moments found above:
\begin{align}
& \frac{x_1+\cdots+x_n} n = \overline x = \frac{b+a} 2 \tag 1 \\[10pt]
& \frac{x_1^2+\cdots+x_n^2} n = \frac{b^2+ba+a^2} 3 \tag 2
\end{align}
It's routine to solve $(1)$ for $b$. Plug that expression into $(2)$ wherever you see $b$. You get a quadratic equation in $a$. Solving a quadratic equation can be done by a known algorithm. You get two solutions. The estimate of $a$ will be the smaller of the two (Exercise: Figure out why it's the smaller one).
A bit of algebra that may be useful in simplifying the answer is this:
$$
\frac{x_1^2+\cdots+x_n^2} n - \left(\frac{x_1+\cdots+x_n} n\right)^2 = \frac{(x_1-\bar x)^2 + \cdots + (x_n-\bar x)^2} n \text{ with } \bar x \text{ as above.}
$$
An alternative approach is to let $m$ be the midpoint of the interval $[a,b]$ and let $c$ be the half-length of the interval, so that the interval is $[m-c, m+c]$. Then you'd have
\begin{align}
& \frac{x_1+\cdots+x_n} n = m, \\[10pt]
& \frac{x_1^2+\cdots+x_n^2} n = m^2 + \frac{c^2} 3.
\end{align}
It's easy to solve that for $m$ and $c$, and above you're given $a$ and $b$ as functions of $m$ and $c$.
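A tiny numerical illustration of the $(m,c)$ route (a sketch, not part of the original answer; it assumes numpy):

```python
# Method-of-moments estimates for Uniform(a, b) via the midpoint/half-length
# parametrization, compared with the MLEs min/max.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(2.0, 7.0, size=10_000)        # true a = 2, b = 7

m_hat = x.mean()                               # sample first moment
c_hat = np.sqrt(3 * (np.mean(x**2) - m_hat**2))
a_hat, b_hat = m_hat - c_hat, m_hat + c_hat
print(a_hat, b_hat)                            # close to 2 and 7

print(x.min(), x.max())                        # the MLEs, for comparison
```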
Note: The method-of-moments estimators plainly omit some relevant information in the data. The MLEs do not. I won't be surprised if there are some sequences $x_1,\ldots,x_n$ for which the method-of-moments estimator of $b$ is smaller than $\max\{x_1,\ldots,x_n\}$, and if so, then a similar problem would afflict the estimator of $a$ in a data set that can easily be constructed from that one. Maybe both pathologies could occur simultaneously.<|endoftext|>
TITLE: Prove inequality $\sqrt{\frac{1}n}-\sqrt{\frac{2}n}+\sqrt{\frac{3}n}-\cdots+\sqrt{\frac{4n-3}n}-\sqrt{\frac{4n-2}n}+\sqrt{\frac{4n-1}n}>1$
QUESTION [7 upvotes]: For any $n\ge2, n \in \mathbb N$ prove that
$$\sqrt{\frac{1}n}-\sqrt{\frac{2}n}+\sqrt{\frac{3}n}-\cdots+\sqrt{\frac{4n-3}n}-\sqrt{\frac{4n-2}n}+\sqrt{\frac{4n-1}n}>1$$
My work so far:
1) $$\sqrt{n+1}-\sqrt{n}>\frac1{2\sqrt{n+0.5}}$$
2) $$\sqrt{n+1}-\sqrt{n}<\frac1{2\sqrt{n+0.375}}$$
REPLY [3 votes]: Another proof is this:
Note that
$$
2 = \sqrt{\frac{4n}{n}} = \frac{1}{\sqrt{n}}\sum_{j=0}^{4n-1}\left(\sqrt{j+1}-\sqrt{j}\right)
$$
where the RHS can be expressed as
$$
\frac{1}{\sqrt{n}}\left(\sum_{j=1}^{2n}(\sqrt{2j}-\sqrt{2j-1})+\sum_{j=1}^{2n}(\sqrt{2j-1}-\sqrt{2j-2})\right)
$$
Using that the function $f:(0,+\infty)\to \mathbb{R}$ given by $f(x)=\sqrt{x+1}-\sqrt{x}$ is strictly decreasing, we deduce, for all $j\in \{1,...,2n\}$,
$$
\sqrt{2j-1}-\sqrt{2j-2}>\sqrt{2j}-\sqrt{2j-1}
$$
hence
$$
2\sum_{j=1}^{2n}\left(\sqrt{\frac{2j-1}{n}}-\sqrt{\frac{2j-2}{n}}\right)>2
$$
The left-hand side is exactly twice the alternating sum in the question, so dividing by $2$ gives the claim.<|endoftext|>
TITLE: Left adjoint for the "strings category" functor
QUESTION [5 upvotes]: Let $\mathbf{Cat}$ be the category of small categories and let $\mathbf{sCat}$ denote the category of simplicial objects in $\mathbf{Cat}$. We have a functor
$$
\text{str}\colon \mathbf{Cat}\longrightarrow \mathbf{sCat}
$$
taking a category $\mathcal{C}$ to the simplicial object whose category of $n$-simplices is given by $\mathcal{C}^{[n]}$ and whose faces and degeneracies are given by precomposition with the cofaces and the codegeneracies in the simplex category.
Question: is there a left adjoint for this functor? If so, how can it be described explicitly?
EDIT: by looking at the simplex category as a discrete 2-category, this may actually be an instance of the enriched version of the fact that presheaf categories are the free cocompletion of ordinary (small) categories. Still, I'd like to have an explicit description of the left adjoint, which I can't immediately see using enriched Kan extensions.
REPLY [3 votes]: This is analogous to the adjunction between the singular simplicial set functor and geometric realization, or to the adjunction between the nerve functor and the fundamental category functor.
Let $I : \Delta \to \mathrm{Cat}$ be the "inclusion" that sends $[n]$ to the category also called $[n]$ that you mentioned in your description of $\mathrm{str}$. Then $\mathrm{str}(\mathcal{C}) = \mathrm{Fun}(I(-), \mathcal{C})$. The left adjoint is then given by $- \otimes I$, the enriched functor tensor product with $I$, computed by an enriched coend $\mathcal{X} \mapsto \int^{[n] \in \Delta} \mathcal{X}_n \times [n]$. (Here, $\mathcal{X}$ is a simplicial object in categories, $\mathcal{X}_n$ is its category of $n$-simplices, and the $[n]$ that $\mathcal{X}_n$ is multiplied by is really the category $I([n])$.)
See the nLab article Nerve and Realization for the general construction covering all three cases I mentioned here, and for a proof of the adjunction.<|endoftext|>
TITLE: Integration by parts or substitution?
QUESTION [5 upvotes]: $$\int_{}^{}x e^x \mathrm dx$$
One of my friends said substitution, but I can't seem to get it to work.
Otherwise I also tried integration by parts, but I'm not getting the same answer as Wolfram.
The space in the question seems like it shouldn't take more than 2 lines though. Am I missing something?
Thanks to all the answers below , I messed up in the original question it was actually
$$\int_{}^{}x e^{x^2} \mathrm dx$$
With help from the below answers I did the following:
Let $u = x^2$, then $\mathrm du=2x\,\mathrm dx$, i.e. $\mathrm dx = \frac{\mathrm du}{2x}$.
So rewriting the integral
$$\int x\cdot e^u \, \frac{\mathrm du}{2x}$$
Simplifying (the factors of $x$ cancel) yields:
$$\frac 1 2\int e^u\,\mathrm du$$
Which in turn yields:
$${\frac{e^u}{2}} + C$$
The rest is fairly obvious!
REPLY [5 votes]: Yes you can solve it by substitution (which is not trivial in this case) but you can choose:
$$u = e^x(x-1) \rightarrow du=xe^xdx\rightarrow dx=\dfrac{du}{xe^x}$$
Replacing in the integral, you get:
$$\int xe^x \,\mathrm dx= \int \mathrm du=u+C=e^x(x-1)+C$$
NOTE: the easiest way to solve integrals of this form ($P_1(x)e^x$ with $P_1$ a polynomial) is by using integration by parts.<|endoftext|>
TITLE: Expected number of tosses to get 3 consecutive Heads
QUESTION [9 upvotes]: I have a fair coin. What is the expected number of tosses to get three Heads in a row?
I have looked at similar past questions such as Expected Number of Coin Tosses to Get Five Consecutive Heads, but I find the proof there intuitive rather than rigorous: the use of the "recursive" element is not justified. The expectation $\mathbb E[X]$ is a number, not a random variable, yet it is treated as one there. Please make this clear.
REPLY [15 votes]: Although the question has already been answered, I would like to offer a very similar solution but with a different mindset.
Picture the process as a chain of four states: state $1$ (no consecutive heads, or the last toss was a tail), state $2$ (one consecutive head), state $3$ (two consecutive heads) and state $4$ (three consecutive heads). The original answer drew this as a crude image, but it essentially explains the answer above very beautifully.
At the beginning, we have no coins tossed, so we have no consecutive heads and are in state $1$. Next, we toss an $H$ or $T$, each with probability $\frac{1}{2}$. Thus, we go to state $2$ with probability $\frac{1}{2}$, and similarly for the moves from state $2$ to $3$ and from $3$ to $4$.
Let $g(x)$ be the expected time until we reach state $4$, $HHH$, from state $x\in\{1,2,3,4\}$. Obviously $g(4)=0$ since we are already at state $4$!
\begin{align}
g(1)&=\frac{1}{2}(g(2)+g(1))+1\\
g(2)&=\frac{1}{2}(g(3)+g(1))+1\\
g(3)&=\frac{1}{2}(g(4)+g(1))+1\\
\end{align}
The $+1$ appears because whenever we move from one state to another we take, or "waste", one step. However, we only take a step in the correct direction with probability $\frac{1}{2}$; otherwise, we have to go back to state $1$, which is why $g(1)$ shows up in each equation.
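Before doing the substitution by hand, one can check both the linear system and the final value numerically; here is a minimal Python sketch (the matrix rows are just the three equations above rearranged, and the Monte Carlo part is an independent sanity check):
    import numpy as np

    # g1 = 1 + (g2 + g1)/2,  g2 = 1 + (g3 + g1)/2,  g3 = 1 + g1/2, rearranged as A g = b
    A = np.array([[ 0.5, -0.5,  0.0],
                  [-0.5,  1.0, -0.5],
                  [-0.5,  0.0,  1.0]])
    b = np.array([1.0, 1.0, 1.0])
    g1, g2, g3 = np.linalg.solve(A, b)
    print(g1, g2, g3)          # 14.0, 12.0, 8.0

    # Monte Carlo check: average number of fair-coin tosses until HHH appears.
    rng = np.random.default_rng(0)
    def tosses_until_hhh():
        run = tosses = 0
        while run < 3:
            tosses += 1
            run = run + 1 if rng.random() < 0.5 else 0
        return tosses
    print(np.mean([tosses_until_hhh() for _ in range(20_000)]))   # close to 14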
Thus, by substitution (or recursion),
$$g(1)=1+\frac{1}{2}g(1)+\frac{1}{2}\left(\frac{1}{2}g(3)+\frac{1}{2}g(1)+1\right)=1+\frac{1}{2}g(1)+\frac{1}{4}g(1)+\frac{1}{2}+\frac{1}{4}\left(\frac{1}{2}g(1)+1\right)=1+\frac{1}{2}+\frac{1}{4}+g(1)\left(\frac{1}{2}+\frac{1}{4}+\frac{1}{8}\right)$$
$$g(1)=14$$<|endoftext|>
TITLE: difference between second order quasi linear and semi linear PDE
QUESTION [5 upvotes]: I am studying second order PDEs and I am a bit confused by the classification of quasilinear and semilinear PDEs.
Could anybody explain with examples what the difference between them is, please?
REPLY [15 votes]: Let $L$ be a $k^{th}$-order linear differential operator, i.e. one which satisfies $L(\alpha u + \beta v) = \alpha L u + \beta Lv$ for all $u,v\in C^k$ and constants $\alpha,\beta$ (of course this notion can be weakened past $C^k$, but this will do for here). We say that the equation $L u =f$, for some given function $f$, is linear.
A semilinear equation is one of the form $L u = f(x,u,Du, \dotsc, D^{k-1} u)$, where $D^j u$ denotes all $j^{th}$ order derivatives of $u$. The right hand side is a generic nonlinearity that involves any possible combination of derivatives up to one order less than the order of the main linear operator. This is the key feature of semilinear equations: they are linear "at the highest order" and possibly nonlinear at lower order.
To describe a quasilinear equation we need to be more careful with naming $L$. Let's say it's of the form
$$
L = \sum_{|\alpha| \le k} a_\alpha \partial_\alpha.
$$
In the above treatment we have that $a_\alpha = a_\alpha(x)$ in order for the operator $L$ to be linear. Now for a quasilinear equation we allow the $a_\alpha$ coefficients to depend on the solution itself, but only up to $k-1$ order derivatives. That is, a quasilinear problem is one of the form
$$
\sum_{|\alpha| \le k} a_\alpha(x,u,Du,\dotsc,D^{k-1}u) \partial_\alpha u = f(x,u,Du,\dotsc,D^{k-1}u).
$$
The key feature here is that the coefficients can only depend on the lower order derivatives, so in some sense if we think of "freezing" the lower order derivatives, then the resulting problem is actually a linear one.
In some sense the main reason for making these distinctions lies in the tools available to solve such problems. Roughly speaking, linear problems are the easiest. Semilinear ones are next, and one often views a semilinear problem as a "small nonlinear perturbation" of a linear one. Quasilinear problems are next in the hierarchy; the construction of solutions is often built on the linear theory but in a more complicated way than for semilinear problems. After quasilinear comes fully nonlinear, which essentially means that the nonlinearity occurs at the highest order of differentiability.
EDIT:
Some first-order examples.
A linear equation:
$$
\partial_1 u + \partial_2 u + u = 0.
$$
A semilinear equation:
$$
\partial_1 u + \partial_2 u + u = \cos(u).
$$
A quasilinear equation:
$$
u\partial_1 u + u^2 \partial_2 u + u = e^u.
$$
A fully nonlinear equation:
$$
(\partial_1 u)^2 + (\partial_2 u)^2 = 1.
$$
Some second-order examples.
A linear equation:
$$
\partial_1 \partial_1 u + \partial_2 \partial_2 u + u = 0.
$$
A semilinear equation:
$$
\partial_1 \partial_1 u + \partial_2 \partial_2 u + u = \cos(u).
$$
A quasilinear equation:
$$
u\partial_1 \partial_1 u + u^2 \partial_2 \partial_2 u + u = e^u.
$$
A fully nonlinear equation:
$$
(\partial_1 \partial_1 u)^2 + (\partial_2 \partial_2 u)^2 = 1.
$$<|endoftext|>
TITLE: Is there a measure space $(X,\mathcal M, m)$ such that $\{m(E) \mid E \in \mathcal M\} = \Bbb Q_{\geq 0} \cup \{+\infty\}$?
QUESTION [8 upvotes]: I have in mind the following question:
Is there a measure space $(X,\mathcal M, m)$ such that the range of $m$ satisfies $S:=\{m(E) \mid E \in \mathcal M\} = \Bbb Q_{\geq 0} \cup \{+\infty\}$?
(I would also accept a space where $\Bbb Q_{\geq 0} \cup \{+\infty\}$ is replaced by $\Bbb Q_{\geq 0}\,$.)
An idea would be to take $X=\Bbb N$ and define $m(\{n\}):=r_n$ the $n$-th positive rational number. But then $m(\{k(n) \in X \mid r_n=1/n^2, n\geq 0\})=\pi^2/6$ is not rational. So the measurable sets corresponding to $1/n^2$ shouldn't be disjoint.
To avoid this, we could demand that some fixed element $x_0$ belongs to every non-empty measurable set. But this is not possible since $\mathcal M$ is a $\sigma$-algebra, in particular it is closed under taking complements.
Similarly, if $(x_n)$ is any sequence of positive rational numbers that converges to $\sqrt 2$, the measurable sets corresponding to $x_n$ shouldn't be included one in another (to avoid a chain).
I could replace $\pi^2/6$ and $\sqrt 2$ by any positive real number (since $\Bbb Q$ is dense in the reals)!
Therefore, my intuition is that such a measure space can't exist. Actually, I believe that the set $S$ defined above should be closed in $\Bbb R \cup \{+\infty\} \cong S^1$ (and even "closed under taking series" with elements in $S$). But I'm unsure if this is true, and how to prove it.
Any comment would be appreciated!
REPLY [5 votes]: The answer to your question is no, and this is not hard. But first, for the record we should point out that one of your conjectures about this is false:
The range of a measure need not be closed.
In fact, although the answer to your question is no, there is a measure with range equal to $$\{0,\infty\}\cup(\Bbb Q\cap[1,\infty)).$$ Say $(r_1,r_2,\dots)$ is an enumeration of the rationals greater than or equal to $1$, and define a measure on $\Bbb N$ by $$\mu(\{n\})=r_n.$$Then every $r_n$ is in the range of $\mu$. If $E\subset\Bbb N$ is finite and nonempty then $\mu(E)$ is a rational greater than or equal to $1$, while if $E$ is infinite then $\mu(E)=\infty$.
So that's interesting. But the range of a measure cannot be all the non-negative rationals plus $\infty$. For example:
Theorem If $\mu$ is a measure such that for every $\delta>0$ there exists $E$ with $0<\mu(E)<\delta$ then the range of $\mu$ is uncountable.
Proof: Choose sets $F_n$ with $\mu(F_n)>0$ and $$\mu(F_{n+1})<\mu(F_n)/10.$$Let $$E_n=F_n\setminus\bigcup_{k=n+1}^{\infty}F_k\quad(n\ge2).$$Then the $E_n$ are disjoint, $\mu(E_n)>0$ and $$\mu(E_{n+1})<\mu(E_n)/3.$$For $A\subset\Bbb N$ let $$S_A=\bigcup_{n\in A}E_n.$$Then $$\mu(S_A)=\sum_{n\in A}\mu(E_n),$$and the fact that $\mu(E_{n+1})<\mu(E_n)/3$ shows that those sums are all distinct (that is, $\sum_A\ne\sum_B$ if $A\ne B$).<|endoftext|>
TITLE: Demystifying the tensor product
QUESTION [5 upvotes]: It seems to me, through my mathematical immaturity, that the tensor product seems to beg for more well-definition. I am working in vector spaces (so we always have a free module) and here is what my professor has shown me thus far.
We can define the tensor product of two (multi-linear) maps as follows. Let $S \in \mathcal{L}(V_1, \dots, V_n; \mathcal{L}(W;Z))$ and $T \in \mathcal{L}(V_{n+1}, \dots , V_{n+m};W)$. We define $S \otimes T \in \mathcal{L}(V_1, \dots , V_{n+m};Z)$ by setting
$$S \otimes T(v_1, \dots ,v_{n+m})=S(v_1, \dots, v_n)[T(v_{n+1}, \dots , v_{n+m})]$$
Now, we do have $\mathcal{L}(V_1, \dots , V_{n+m};Z) \cong V^*_1 \otimes \dots \otimes V^*_{n+m} \otimes Z$ I believe. So it is, up to isomorphism, a tensor but not, itself, a tensor.
Further, suppose that $V_1, \dots , V_n$ are vector spaces. We define the tensor product
$$V_1 \otimes \dots \otimes V_n = \mathcal{L}(V^*_1, \dots V^*_n; \mathbb{F})$$
Since we regard $V$ and $V^{**}$ to be identified we have
$$v_1 \otimes \dots \otimes v_n \in V_1 \otimes \dots \otimes V_n$$
defined
$$(v_1 \otimes \dots \otimes v_n)(L_1, \dots L_n)=L_1(v_1)\dots L_n(v_n)$$
Finally, we have defined a tensor of type $m,n$ to be a multi-linear map from $\underbrace{V^* \times \dots \times V^*}_{m \text{ times}}\times \underbrace{V \times \dots \times V}_{n \text{ times}} \to \mathbb{F}$.
problem
So it seems to me that tensor products do not always produce tensors? That a tensor product sometimes is and sometimes is not a map to the field? Which makes me wonder how we can consider the idea to be well-defined? I have to be told by some to think about it in terms of the universal property, i.e., it takes multi-linear maps to linear ones but that isn't as illuminating as some may think. How is one to think about this product and these objects? Thanks for your help!
REPLY [3 votes]: My confusion, now all cleared up years later, was one of notation. When we are being slightly less precise we can define the tensor product of maps say from
$$f \otimes g: V_1 \otimes V_2 \to V'_1 \otimes V'_2$$
which is defined by $f$ acting on the first coordinate and $g$ on the second. For all those wondering, this is not a tensor. This is (slightly lazy) notation demonstrating how we get certain maps once we have taken the tensor product of vector spaces (modules, more generally). $f \otimes g$ is not a tensor but acts on them.
There is a reason we abuse notation like this as there is a correspondence of sorts between this ``tensor product of maps'' and tensor products between spaces of maps.
So no, the tensor product always gives us tensors, and we abuse this notation to say what happens with maps. Hope this helps anyone who has a similar confusion.<|endoftext|>
TITLE: Why is the axiom of choice controversial?
QUESTION [7 upvotes]: In other words, what are the arguments for ZF over ZFC, and what philosophical issues have people raised against including it as a standard axiom of set theory?
REPLY [2 votes]: This axiom is controversial because although it seems like a relatively intuitive idea, there are still some issues. One of the best ways of understanding it is in this way: Take the set of all pairs of shoes and find a way to pick one shoe from each pair in order to form a new set. Easy! Just let our choice function be "take the right shoe from each pair" and you get the set of all right shoes. Consider this now: Take the set of all pairs of socks. The axiom of choice tells us there is a way to pick one sock from each pair to form a new set, but how you make that choice is not easy to describe. So although the axiom says a choice function always exists, what that choice function is or how it is defined is not always apparent, and in many cases it may even be impossible to define explicitly.
Some more things: The axiom of choice is indeed an extremely useful axiom in many areas of math. However, it gives rise to the Banach-Tarski Paradox and the existence of nonmeasurable subsets of the real numbers. Also, the axiom of choice is equivalent to the statement that any set can be well-ordered, i.e., every nonempty set can be endowed with a total order such that every nonempty subset has a least element. Therefore, this means that $\mathbb{R}$ can be well-ordered. However, no one has ever been able to explicitly state how one does this, and if I'm not mistaken, it may be impossible to do so with the current axioms we have available. I also want to add that, as one person in the comments section mentioned, we need AC in order to talk about cardinalities of arbitrary infinite sets.<|endoftext|>
TITLE: Creating a tight frame of $\mathbb{R}^{n}$ when already knowing some of its vectors.
QUESTION [5 upvotes]: I'm wondering whether or not there's an optimal way for adding rows to a given matrix $S\in\mathbb{R}^{m\times mn}$, $m\leq n$, so that the columns of the resulting matrix form an orthogonal system of vectors of equal norm.
I've been struggling with this problem for quite some time now and the reason is that I don't want to change my starting matrix $S$.
This is equivalent to creating a tight frame of $\mathbb{R}^{mn+v}$, $v\geq 0$ when you already know some of its vectors. Up to this point, I have succeeded in creating a tight frame given any number of its vectors but I don't know how to minimize $v$.
REPLY [2 votes]: If the number $v$ of additional rows is not prespecified but chosen at will, it is always possible to append a square matrix to $S$ to form an $m(n+1)\times mn$ matrix $A$ that has mutually orthogonal columns of equal norms, although I am not sure if this qualifies as "optimal". Anyway, the construction is conceptually easy. Let $S=U(\Sigma,0)V^\top$ be a singular value decomposition. Take
$$
T = \pmatrix{\sqrt{\sigma_1^2I_m-\Sigma^2}\\ &\sigma_1I_{m(n-1)}}V^\top,
\quad A=\pmatrix{S\\ T}.
$$
Note that $\sigma_1^2I_m-\Sigma^2\succeq0$ has a real square root because it is positive semidefinite. Now
$$
A^\top A=S^\top S+T^\top T=V\pmatrix{\Sigma^2\\ &0}V^\top
+V\pmatrix{\sigma_1^2I_m-\Sigma^2\\ &\sigma_1^2I_{m(n-1)}}V^\top=\sigma_1^2I_{mn}.
$$
Hence the columns of $A$ are mutually orthogonal and each column has norm $\sigma_1$.
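Here is a small numerical sketch of this construction (Python with numpy; the random matrix `S` and all names are just illustrative):
    import numpy as np

    m, n = 3, 4
    rng = np.random.default_rng(0)
    S = rng.standard_normal((m, m * n))               # the given m x mn matrix

    U, s, Vt = np.linalg.svd(S, full_matrices=True)   # S = U (Sigma, 0) V^T
    sigma1 = s[0]

    d = np.full(m * n, sigma1)                        # diagonal of the appended square block
    d[:m] = np.sqrt(sigma1**2 - s**2)                 # sqrt(sigma1^2 I_m - Sigma^2)
    T = np.diag(d) @ Vt                               # the mn x mn matrix to append
    A = np.vstack([S, T])

    print(np.allclose(A.T @ A, sigma1**2 * np.eye(m * n)))   # True: orthogonal columns of norm sigma1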
If $m+v$ is required to be equal to $mn$, i.e. if the completed matrix $A$ is required to be square, clearly the completion is possible if and only if $SS^T=\alpha I_m$ for some $\alpha>0$. If this condition is satisfied, you may simply fill in the other rows by the Gram-Schmidt process.<|endoftext|>
TITLE: Are minimizing a function and root finding the same?
QUESTION [6 upvotes]: What is the relationship between minimizing a function and finding a root of an equation? Are they the same? I know that for both problems we have similar algorithms, such as gradient descent or Newton's method.
For example, let us assume $x$ is a scalar. Finding a root of the equation $f(x)=b$ means checking where the function $f(x)-b$ crosses the x axis. This is definitely not the same as minimizing $f(x)-b$.
But in the convex optimization book, minimizing $\|Ax-b\|_2^2$ is equivalent to solving the linear system $Ax=b$.
What am I missing? In which cases can we turn an optimization problem into root finding, or vice versa?
REPLY [7 votes]: Root-finding and optimization are not the same, but a root-finding problem can be reformulated into an optimization problem. That is, we can construct an optimization problem from a root-finding problem where the solution to both problems are the same.
The value(s) of $x$ that satisfy $f(x) = b$, i.e. $f(x)-b = 0$, will also be the global minimizer(s) of the optimization problem $\min ||f(x)-b||^2_2$.
The idea is that the $x^*$ that makes $f(x^*) = b$ also makes $f(x^*)-b = 0$ and will also be a global minimizer of $||f(x^*)-b||^2_2$, since the lowest value this objective can achieve is $0$.
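Here is a tiny illustration of the reformulation (a sketch in Python with scipy; the function $f$, the value $b$ and the starting guesses are just examples):
    from scipy import optimize

    f = lambda x: x**3          # example function
    b = 8.0                     # we want f(x) = b, i.e. x = 2

    # Root-finding formulation: solve f(x) - b = 0 on a bracketing interval.
    root = optimize.brentq(lambda x: f(x) - b, 0.0, 10.0)

    # Optimization formulation: minimize ||f(x) - b||^2; the global minimum value is 0.
    res = optimize.minimize(lambda x: (f(x[0]) - b)**2, x0=[1.0])

    print(root, res.x[0])       # both are (numerically) 2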
*If the function $f$ and its domain have the right mathematical properties, then you can convert it to an optimization problem (things like differentiability of $f$).<|endoftext|>
TITLE: Number Theory and d-Self-Contained Numbers
QUESTION [7 upvotes]: Given any natural number $N = a_{n}a_{n-1}\ldots a_{1}$, let us associate to it the set $S_{N} = \bigcup_{j=1}^{n}\{(a_{j},j)\}$. We're going to define a d-self-contained number as any natural number which satisfies the rule:
\begin{align*}
&\forall k\leq n,\exists\sigma\subset S_{N}\setminus\{(a_{k},k)\}; a_{k} = \sum_{s\in\sigma}(-1)^{e_{s}}\|p_{1}(s)\|^{d},\\
&\text{where}\,\,n\geq 3,\,\,e_{s}\in\{0,1\}\,\,\text{and}\,\,\#\sigma\geq 2
\end{align*}
In other words, a number is d-self-contained if each of its digits can be obtained from the others as a linear combination of their d-th power (where d is natural and fixed) whose coefficients belong to the set {-1,0,1}, where at least two of these coefficients are non-zero terms. It is worth saying that $p_{1}$ is the projection mapping on the first coordinate. Let us work it through examples.
The number 101 is 2-self-contained since we can rewrite its digits as: $1 = 0^{2} + 1^{2}$ and $0 = 1^{2} - 1^{2}$. Furthermore, the number 101 is d-self-contained for every d. Indeed, we have:
$$1 = 0 + 1 = 0^{d} + 1^{d}\,\,\text{and}\,\,0 = 1 - 1 = 1^{d} - 1^{d}$$
On the other hand, the number 121 is not 2-self-contained given that:
$$1 \neq 2^{2} - 1^{2}\,\,\text{and}\,\,1\neq 2^{2} + 1^{2}$$
It is, however, 1-self-contained. What about 3-self-contained numbers? Here is an example which proves that they exist: 111111112. Indeed, its digits satisfy the following relationships:
$$1 = 2^{3} - 1^{3} - 1^{3} - 1^{3} - 1^{3} - 1^{3} - 1^{3} - 1^{3}\,\,\text{and}\,\,2 = 1^{3} + 1^{3}$$
What is more, this number is also 1- and 2-self-contained. Indeed, we have:
$$1 = 2 - 1\,\,\text{and}\,\,2 = 1 + 1;\quad
1 = 2^{2} - 1^{2} - 1^{2} - 1^{2}\,\,\text{and}\,\,2 = 1^{2} + 1^{2}$$
Besides that, this example provides us with a way to build a d-self-contained number for every d:
$$N = \overbrace{11\ldots 1}^{2^{d}}2\Rightarrow 2^{d} = \overbrace{1^{d} + 1^{d} + \ldots + 1^{d}}^{2^{d}}\,\,\text{and}\,\,1 = 2^{d} - \overbrace{1^{d} - 1^{d} - \ldots - 1^{d}}^{2^{d} - 1}$$
Now that the definition has been made clear, I would like to ask some questions. First of all, can anyone propose a criterion to identify these numbers quickly? Secondly, is there any formula which generates them all? And, finally, if we denote the set of d-self-contained numbers by $A_{d}$, does the following proposition hold: $A_{1}\supset A_{2}\supset\ldots\supset A_{k}\supset\ldots$? Thank you in advance for any contribution.
REPLY [2 votes]: Here's an easy answer to the last question: $10127$ is $3$-self-contained ($7 = 2^3 - 1^3$, $2 = 1^3+1^3$, $1 = 1^3+0^3$, $0=1^3-1^3$), but it's not $2$-self-contained because $7 > 1^2+0^2+1^2+2^2$. Similarly I believe $10129$ and $111129$ are in $A_3 \setminus A_2$. Thus $A_2 \not\supset A_3$, so the infinite descending chain proposition is false.
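Such examples are easy to verify by brute force. Here is a small Python sketch of a checker following the definition in the question (coefficients in $\{-1,0,1\}$, at least two of them nonzero; the $n\ge 3$ digit-count requirement is not enforced):
    from itertools import product

    def is_self_contained(N, d):
        digits = [int(ch) for ch in str(N)]
        for k, a_k in enumerate(digits):
            others = digits[:k] + digits[k + 1:]
            ok = any(
                sum(e * a**d for e, a in zip(coeffs, others)) == a_k
                for coeffs in product((-1, 0, 1), repeat=len(others))
                if sum(e != 0 for e in coeffs) >= 2
            )
            if not ok:
                return False
        return True

    print(is_self_contained(10127, 3), is_self_contained(10127, 2))   # True False
    print(is_self_contained(101, 2), is_self_contained(121, 2))       # True False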
I suspect that, much like the OEIS sequence that Robert Soupe points to, the structure of $A_d$ for any fixed $d$ is quite simple and can be described by a finite automaton. But I hesitate to make any guesses as to how the complexity grows with $d$.<|endoftext|>
TITLE: Axiom of Choice: Where does my argument for proving the axiom of choice fail? Help me understand why this is an axiom, and not a theorem.
QUESTION [54 upvotes]: In terms of purely set theory, the axiom of choice says that for any set $A$, its power set (with empty set removed) has a choice function, i.e. there exists a function $f\colon \mathcal{P}^*(A)\rightarrow A$ such that for any subset $S$ of $A$, $f(S)\in S.$ Is this correct?
My question then is about proving this fact, so that we do not need to put it as an axiom. Now, given all the research that has been done on this single object, the Axiom of Choice, I believe there must be some flaw in my argument, but I cannot find the mistake.
For any $S\in \mathcal{P}^*(A)$, since $S\neq \emptyset$, $\exists s\in S$. Define $f(S)=s$. Then $f$ is a choice function.
This was showing me that the axiom of choice is proved, but then why it had been put as an axiom? For example, in this book, the author asserts that
It is a metatheorem of mathematical logic that it is impossible to specify the function that assigns to each non-empty subset of $\mathbb{R}$, an element of itself.
There are several notes and books on the axiom of choice, but here I am trying to understand it by working through an argument for a concrete problem, to see where the problem actually arises.
REPLY [25 votes]: This is a confusing matter, mainly because the kind of reasoning you use in your proof is usually taken to be valid.
However, in order to formalize that reasoning in axiomatic set theory, we need to reduce it to particular symbolic formulas in a formal logic system. And it turns out that the rules of symbolic logic and set theory that are sufficient to express most other kinds of generally accepted proofs can't by themselves express your reasoning.
We declare that this is not the fault of your reasoning, but of the limited logical rules we already have. Then we set out to fix our axiomatic set theory by adding a new rule stating that it's allowed to do what you do. This new rule is the axiom of choice.
So the problem with your proof is not that it doesn't work, from the perspective of ordinary mathematics -- but that what it does is not very interesting. It just says that if we accept this kind of reasoning, then we must conclude that this kind of reasoning works, which doesn't really tell us anything.
What a "proof of the axiom of choice" ought to be would be an argument that even if we don't extend our system with this new rule, we can still prove everything we can prove with the rule. But that means that the proof has to be done with fewer tools than we normally allow ourselves to use.
Otherwise, the end result would be something like claiming that you don't need to buy a hammer for your toolbox, because you can still drive in nails. How? Well, just hit the nail with a hammer ...<|endoftext|>
TITLE: Daunting series of integrals: $\sum_{n=2}^\infty\int_0^{\pi/2}\sqrt{\frac{(1-\sin x)^{n-2}}{(1+\sin x)^{n+2}}}\log(\frac{1-\sin x}{1+\sin x})dx$
QUESTION [15 upvotes]: My coleague showed me the following integral yesterday
\begin{equation}
I=\sum_{n=2}^{\infty}\int_0^{\pi/2}\sqrt{\frac{(1-\sin x)^{n-2}}{(1+\sin x)^{n+2}}}\log\left(\!\frac{1-\sin x}{1+\sin x}\!\right)\ dx=\frac{5}{4}-\frac{\pi^2}{3}\tag1
\end{equation}
He also claimed the following closed-form:
\begin{equation}
J=\int_{2}^{\infty}\int_0^{\pi/2}\sqrt{\frac{(1-\sin x)^{y-2}}{(1+\sin x)^{y+2}}}\log\left(\!\frac{1-\sin x}{1+\sin x}\!\right)\ dx\ dy=-\frac{4}{3}\tag2
\end{equation}
$(1)$ and $(2)$ seem difficult to deal with, but I believe there are some tricks that I can use but I'm not able to spot it yet. Using substitution $x\mapsto\frac\pi2-x$, one gets
\begin{equation}
I=\sum_{n=2}^{\infty}\int_0^{\pi/2}\sqrt{\frac{(1-\cos x)^{n-2}}{(1+\cos x)^{n+2}}}\log\left(\!\frac{1-\cos x}{1+\cos x}\!\right)\ dx\tag3
\end{equation}
and
\begin{equation}
J=\int_{2}^{\infty}\int_0^{\pi/2}\sqrt{\frac{(1-\cos x)^{y-2}}{(1+\cos x)^{y+2}}}\log\left(\!\frac{1-\cos x}{1+\cos x}\!\right)\ dx\ dy\tag4
\end{equation}
but I don't know how to use $(3)$ and $(4)$ to evaluate $(1)$ and $(2)$. I'm quite sure that the main problem here is to evaluate
\begin{equation}
K=\int_0^{\pi/2}\sqrt{\frac{(1-\sin x)^{n-2}}{(1+\sin x)^{n+2}}}\log\left(\!\frac{1-\sin x}{1+\sin x}\!\right)\ dx
\end{equation}
How does one prove $(1)$ and $(2)$?
REPLY [18 votes]: Let's employ the weirdo substitution taught by my brother: $\sin x=\tanh t$. Doing so, one will get
\begin{align}
K&=\int_0^{\infty}\sqrt{\frac{(1-\tanh t)^{n-2}}{(1+\tanh t)^{n+2}}}\ln\left(\frac{1-\tanh t}{1+\tanh t}\right)\ \frac{dt}{\cosh t}\\[10pt]
&=\int_0^{\infty}\sqrt{\left(\frac{\cosh t-\sinh t}{\cosh t+\sinh t}\right)^{n-2}}\frac{\cosh t}{(\cosh t+\sinh t)^2}\ \ln\left(\frac{\cosh t-\sinh t}{\cosh t+\sinh t}\right)\ dt\\[10pt]
&=-\int_0^{\infty} e^{-(n-2)t}\left(e^{-t}+e^{-3t}\right)\ t\ dt\\[10pt]
&=-\frac{1}{(n+1)^2}-\frac{1}{(n-1)^2}
\end{align}
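As a numerical sanity check of this closed form, here is a short Python/scipy sketch (the series is simply truncated, and the integration endpoint is nudged away from $\pi/2$ to avoid evaluating $\log 0$):
    import numpy as np
    from scipy.integrate import quad

    def K_numeric(n):
        f = lambda x: np.sqrt((1 - np.sin(x))**(n - 2) / (1 + np.sin(x))**(n + 2)) \
                      * np.log((1 - np.sin(x)) / (1 + np.sin(x)))
        val, _ = quad(f, 0.0, np.pi / 2 - 1e-12)
        return val

    print(K_numeric(3), -(1/(3 - 1)**2 + 1/(3 + 1)**2))     # both about -0.3125

    partial = sum(-(1/(n - 1)**2 + 1/(n + 1)**2) for n in range(2, 200_000))
    print(partial, 5/4 - np.pi**2 / 3)                       # both about -2.0399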
Thus, evaluating $I$ and $J$ is easy-peasy-lemon-squeezy.<|endoftext|>
TITLE: Number of positive unequal integer solutions of $x+y+z+w=20$
QUESTION [7 upvotes]: What is the number of positive different integer solutions of $x+y+z+w=20$, where $x,y,z,w$ are all different and positive?
It would be nice if coding is not used. I am given the answer $552$.
REPLY [2 votes]: Here is the best solution I found when I was preparing for IIT-JEE, 2000-2003.
Suppose the equation is $x + y + z + t = N$.
Find $k= [(N-2n-4)/2]$, where $n$ is the least integer present in the solution.
For example, to find the number of solutions in which the least number present is $1$, put $n=1$.
Now draw the following diagram (the diagram from the original answer is not reproduced here).
Stop when $1$ comes.
Now, if $N$ is even, add the lower-row terms along with the middle-row terms and multiply the sum by $4!$. This is the number of unequal integral solutions of the above equation having $n$ as the least integer.
If $N$ is odd, add the upper-row terms along with the middle-row terms and multiply the sum by $4!$.
Repeat the process for the other possible values of $n$.
In the above case $n$ can take the values $0, 1, 2$ and $3$ if we are interested in finding non-negative unequal integral solutions, because $4 + 5 + 6 + 7 > 20$.
for $n=0$, there are $24\times 4!$ solutions
for $n=1$, there are $14\times 4!$ solutions
for $n=2$, there are $7\times 4!$ solutions
and for $n=3$, there are $2\times 4!$ solutions.
The total number of non-negative solutions is obtained by adding these up: $(24+14+7+2)\times 4! = 1128$.
If you are interested only in positive unequal integral solutions, drop the case $n=0$, and you get $552$.
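These counts are easy to confirm by brute force; here is a quick Python check (it counts ordered tuples, so each set of four distinct values is counted $4!$ times):
    def count(total, low):
        # ordered tuples (x, y, z, t), all distinct, each >= low, summing to `total`
        c = 0
        for x in range(low, total + 1):
            for y in range(low, total + 1):
                for z in range(low, total + 1):
                    t = total - x - y - z
                    if t >= low and len({x, y, z, t}) == 4:
                        c += 1
        return c

    print(count(20, 1), count(20, 0), count(20, 2))   # 552, 1128, 216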
If you wish to obtain solutions with $x, y, z, t > 1$, drop both the cases $n=0$ and $n=1$, and you get $216$.
and so on.......<|endoftext|>
TITLE: Prove the sum of squares of 3 rationals cannot be 7
QUESTION [6 upvotes]: Prove there do not exist $r_1, r_2, r_3 \in \mathbb{Q}$ such that $${r_1}^2 + {r_2}^2 + {r_3}^2=7 \tag1$$
From $(1)$, after clearing denominators, we get $a^2 + b^2 + c^2=7n^2 \tag2$ where $a,b,c,n \in \mathbb{N}$. I have tried playing with the parity of these numbers, without success.
UPDATE
Suppose $n$ is even. Then either $a, b, c$ are all even or only one of them, let's say $a$, is even. The latter case is not possible because reducing (2) modulo 4 we get $2=0$. So $a, b, c$ are all even. Repeatedly dividing by 4, we can reduce this case to the case of $n$ odd.
Now suppose $n$ is odd. Then either $a, b, c$ are all odd or exactly two of them, let's say $a,b$, are even. The latter case is not possible because reducing (2) modulo 4 we get $1=3$.
The only case I cannot cover is $a,b,c,n$ all odd.
REPLY [4 votes]: This is about 2-adic restrictions. First, odd squares of integers are $1 \pmod 8.$ Integer squares can only be $0,1,4 \pmod 8$ in any case. Therefore the sum of three integer squares cannot be $7 \pmod 8.$
Next, if the sum of three squares is divisible by $4,$ so $x^2 + y^2 + z^2 = k$ with $k \equiv 0 \pmod 4,$ then $x,y,z$ must be even so we can divide through and get integers $\left( \frac{x}{2} \right)^2 + \left( \frac{y}{2} \right)^2 +\left( \frac{z}{2} \right)^2 = \frac{k}{4}.$ This is all you need to deal with $x^2 + y^2 + z^2 = 7 n^2$ in integers.
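The mod-$8$ obstruction is small enough to verify exhaustively; a two-line Python check:
    residues = {(a*a + b*b + c*c) % 8 for a in range(8) for b in range(8) for c in range(8)}
    print(sorted(residues))   # [0, 1, 2, 3, 4, 5, 6] -- the residue 7 never occurs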
Also worth mentioning is the Aubry-Davenport-Cassels theorem: there is a geometric proof that, if a number is the sum of three rational squares, it is also the sum of three integer squares. This is presented in Serre's little book.
About $7$ itself, if we have $u^2 + v^2 + w^2 = k$ with $u,v,w$ not all divisible by $7,$ then we can solve $x^2 + y^2 + z^2 = 49k$ with $x,y,z$ all nonzero $\pmod 7.$ That is, we choose one of $(u,v,w)$ or $(-u,v,w)$ or $(u,-v,w)$ or $(u,v,-w)$ (and rename as $(u,v,w)$ again) so that $u + 2 v + 4 w \neq 0 \pmod 7.$ Then we take
$$ x = 3u+6v - 2w, \; \; \; y = -2u+3v +6w, \; \; \; z = 6u -2v +3 w. $$
All are nonzero $\pmod 7.$ This is the rational orthogonal matrix
$$
\frac{1}{7}
\left(
\begin{array}{rrr}
3 & 6 & -2 \\
-2 & 3 & 6 \\
6 & -2 & 3
\end{array}
\right)
$$
as in PALL 1940<|endoftext|>
TITLE: How much time does a great mathematician take to solve an extreme problem?
QUESTION [17 upvotes]: I really love math, and I can spend hours, days or even years to solve a really simple problem if I can't do it. However, there are certain problems, which I am not able to solve in an hour or so. It takes me a lot of time to even do half of the problem. When I am frustrated and finally look at the solution, I feel like it was just some rigorous algebraic manipulation that I wasn't able to do; there was nothing 'new' or different about the problem. For instance, consider this problem:
If $m^2 + M^2 + 2mM\cos\theta=1$, $n^2 + N^2 + 2nN\cos\theta=1$ and $mn
+ MN + (mN+Mn)\cos\theta=0$, then prove that $m^2 + n^2=\text {cosec}^2\theta$.
I was able to do half of this problem, but it took me a very long time. And when I read the solution, it was just some algebraic manipulation that I was not able to do.
Now, what I want to ask is two things:
Is it important for me to spend a lot of time on these kinds of problems, where it doesn't require something new, or is it my fault that I am not able to do these manipulations? How can one improve?
If I give this problem to a great mathematician, then how much time will he take to solve it?
----------Added after question put on hold as "not about mathematics" ---------
The "not about mathematics" is followed by "as defined in the help center". The help center page has three sections: What to ask here; What might be better asked elsewhere ("while still on-topic here"); and What not to ask here. Clearly the closure must be placing the question in the third category. The help centre page begins "And some questions are considered off-topic: " and continues with 5 groups: (1) physics, engineering and financial questions, (2) typesetting questions, (3) numerology, (4) questions seeking personal advice for choosing a course, academic program, career path etc. Such questions should be directed to those employed by the institution in question or other qualified individuals who know your specific circumstances, (5) questions about the site itself should be asked on Mathematics meta instead.
(4) is quoted in full, because this question manifestly does not fit the other parts. The first half of (4) about institutions clearly does not fit this question. The only possible argument is whether the last part about qualified individuals could be generalised to fit this question. That would seem to turn on the reference to "your specific circumstances". Any such argument looks weak, particularly when the two answers do not make any reference to such things.
Finally, there is the question of whether (1)-(5) are just examples and the ban goes wider. Again it is hard to see how a fair reading supports that.
In the other direction, @AlexM. makes a good point in his comment below. Note also that (soft-question) is a standard tag, used 138 times so far this month. More generally the question of how one goes about making an important contribution to maths seems highly relevant to mathematics as a discipline.
REPLY [11 votes]: An important thing is to first find a resolution strategy. Your intuition should tell you how the computation will proceed.
In this case, I noticed that the goal is to eliminate the variables $M$ and $N$, and you can do that by completing the square in the first two equations. So a possible strategy is to solve for $M$ and $N$ and plug them into the third equation, and we will find a relation.
$$M^2 + 2mM\cos\theta+m^2\cos^2\theta=(M+m\cos\theta)^2=1-m^2+m^2\cos^2\theta=1-m^2\sin^2\theta.$$
Similarly
$$(N+n\cos\theta)^2=1-n^2\sin^2\theta.$$
Then I breathed a little (instead of rushing to the obvious solution of solving for $M$ and $N$ completely) and noticed that the LHS of the third relation was very close to the product of the two LHS above, so (without taking the square roots prematurely)
$$(1-n^2\sin^2\theta)(1-m^2\sin^2\theta)=(M+m\cos\theta)^2(N+n\cos\theta)^2=(MN+Mn\cos\theta+Nm\cos\theta+mn\cos^2\theta)^2=(0-mn\sin^2\theta)^2$$
and from there the claim: expanding gives $1-(m^2+n^2)\sin^2\theta+m^2n^2\sin^4\theta=m^2n^2\sin^4\theta$, hence $m^2+n^2=\frac{1}{\sin^2\theta}=\operatorname{cosec}^2\theta$.
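For the skeptical reader, the whole manipulation can be checked symbolically as a polynomial identity; here is a sketch with sympy (the symbol `c` stands for $\cos\theta$, and the combination of the three constraints used below is the one implicit in the computation above):
    import sympy as sp

    m, n, M, N, c = sp.symbols('m n M N c')
    s2 = 1 - c**2                                  # sin^2(theta)

    eq1 = m**2 + M**2 + 2*m*M*c - 1                # first constraint  (= 0)
    eq2 = n**2 + N**2 + 2*n*N*c - 1                # second constraint (= 0)
    eq3 = m*n + M*N + (m*N + M*n)*c                # third constraint  (= 0)

    A, B = (M + m*c)**2, (N + n*c)**2
    C = (M + m*c)*(N + n*c)

    # 1 - (m^2+n^2) sin^2(theta) written as an explicit combination of the constraints,
    # so it vanishes whenever eq1 = eq2 = eq3 = 0, i.e. m^2 + n^2 = cosec^2(theta).
    target = 1 - (m**2 + n**2)*s2
    combo = eq1*eq2 - A*eq2 - B*eq1 + 2*C*eq3 - eq3**2
    print(sp.expand(target - combo))               # 0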
It's all about practicing, observing and spotting familiar patterns.
PS: I am no great mathematician.<|endoftext|>
TITLE: Prove a group of order 12 must have an element of order 2
QUESTION [8 upvotes]: Question: Prove that a group of order 12 must have an element of order 2.
I believe I've made great stride in my attempt.
By corollary to Lagrange's theorem, the order of any element $g$ in a group $G$ divides the order of a group $G$.
So, $ \left | g \right | \mid \left | G \right |$.
Hence, the possible orders of $g$ is $\left | g \right |=\left \{ 1,2,3,4,6,12 \right \}$
Suppose $\left | g \right |=12.$
Then, $g^{12}=\left ( g^{6} \right )^{2}=e.$
So, $\left | g^{6} \right |=2$
Using the same idea and applying it to $\left | g \right |\in\left \{ 6,4,2 \right \},$
we see that a suitable power of $g$ has order 2.
However, if $\left | g \right |=3$ or $\left | g \right |=1$, this argument does not produce an element of order 2.
How can I take this attempt further?
Thanks in advance. Useful hints would be helpful.
REPLY [15 votes]: Hint: Here is a simple proof idea that every group of even order must have an element of order $2$.
Pair every element in $G \backslash \{ e \}$ with its inverse. If all pairs consist of two different elements then $G \backslash \{ e \}$ would have an even number of elements.
What does it mean that $a=a^{-1}$?
$a=a^{-1} \Leftrightarrow a^2=e$. And since $a \neq e$ we get that $ord(a)=2$.<|endoftext|>
TITLE: Explanation needed for a statement about power series convergence
QUESTION [5 upvotes]: I got a task in front of me but I don't really understand it. If someone could explain, I think I would be able to solve it myself.
$P(x) = \sum_{k=0}^{\infty}a_{k}x^{k}$ is a power series. There exists a $k_{0} \in \mathbb{N}$ with $a_{k} \neq 0$ for all $k \geq k_{0}$.
Prove that: if the sequence $\left ( \left | \frac{a_{k+1}}{a_{k}} \right | \right )_{k \geq k_{0}}$ converges towards a number in $\mathbb{R}$ or towards $\infty$, and if $a:= \lim_{k\rightarrow \infty} \left | \frac{a_{k+1}}{a_{k}} \right | \in \mathbb{R} \cup \left \{ \infty \right \}$ denotes this limit, then the following holds for the radius of convergence $R$ of $P$:
$$R=\begin{cases}
0, & a = \infty\\
\infty, & a = 0 \\
\frac{1}{a}, & \text{otherwise}
\end{cases}$$
What is meant by $k_{0}$ ? It's just any unknown variable which seems to be smaller or equal $k$, right? Oh and it cannot be smaller than zero.
What is $a_{k}$ ? It's just any sequence that cannot be zero, right?
So first I take the sequence $a_{n}$, use the ratio test to see if it converges. Okay after that is done, I check if in the ratio test, I get + or - $\infty$.
Is it right so far?
But what confuses me most is this:
$a:= \lim_{k\rightarrow \infty} \left | \frac{a_{k+1}}{a_{k}} \right | \in \mathbb{R} \cup \left \{ \infty \right \}$
What is it saying with infinity?
Sorry I haven't started with the task but first I try to understand everything, then start.
REPLY [3 votes]: Hint: Since you've gotten an explanation of the notation already, here's the sketch of how to prove the result. Radius of convergence means that the power series converges for anything strictly inside. So if $|x| = r < \frac1a$, then let $s$ be such that $r < s < \frac1a$; since $|\frac{a_{k+1}}{a_k}| \to a$ as $k \to \infty$, eventually $|\frac{a_{k+1}}{a_k}| < s^{-1}$, which by induction gives $|a_{m+k}| < |a_m| s^{-k}$ for every natural $k$, where $m$ is some (sufficiently large) constant natural number. Therefore for any natural $q \ge p \ge m$ we have $| \sum_{k=p}^q a_k x^k | \le \sum_{k=p}^\infty |a_k| r^k \le \sum_{k=p}^\infty |a_m| s^{m-k} r^k = |a_m| s^m \sum_{k=p}^\infty \left(\frac{r}{s}\right)^k$, which is finite since $\frac{r}{s} < 1$. Thus by Cauchy convergence the original power series converges.<|endoftext|>
TITLE: Does a set of $n+1$ points that affinely span $\mathbb{R}^n$ lie on a unique $(n-1)$-sphere?
QUESTION [11 upvotes]: In $\mathbb{R}^2$ every three points that are not collinear lie on a unique circle. Does this generalize to higher dimensions in the following way:
If an $(n+1)$-element subset $S$ of $\mathbb{R}^n$ does not lie on any linear manifold (flat) of dimension less than $n$, then there is a unique $(n-1)$-sphere containing $S$.
If not, then what would be the proper generalization?
REPLY [2 votes]: Why not just apply a circular inversion? If we have $p_0,p_1,\ldots,p_n\in\mathbb{R}^n$ in general position, we may consider $q_1,q_2,\ldots,q_n$ as the images of $p_1,p_2,\ldots,p_n$ under a circular inversion with respect to a unit hypersphere centered at $p_0$. There is a hyperplane $\pi$ through $q_1,q_2,\ldots,q_n$, and by applying the same circular inversion to $\pi$ we get a hypersphere through $p_0,p_1,\ldots,p_n$.
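Independently of the inversion argument, the sphere can also be computed directly: subtracting the equation $|p_i-c|^2=r^2$ for $p_0$ from the one for each other point gives a linear system for the center. A small numerical sketch in Python (the sample points are random and merely illustrative):
    import numpy as np

    def circumsphere(points):
        """Center and radius of the (n-1)-sphere through n+1 points in general position in R^n."""
        p = np.asarray(points, dtype=float)
        p0, rest = p[0], p[1:]
        # |x - c|^2 = r^2 for every point; subtracting the p0-equation gives
        #   2 (p_i - p_0) . c = |p_i|^2 - |p_0|^2
        A = 2.0 * (rest - p0)
        b = (rest**2).sum(axis=1) - (p0**2).sum()
        c = np.linalg.solve(A, b)
        return c, np.linalg.norm(p0 - c)

    rng = np.random.default_rng(0)
    pts = rng.standard_normal((4, 3))        # 4 points in R^3, almost surely in general position
    center, r = circumsphere(pts)
    print([float(np.linalg.norm(q - center)) for q in pts], r)   # all four distances equal r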
The uniqueness part is easy.<|endoftext|>
TITLE: Haar measure, can image of modular function be any subgroup of $(0,\infty)$?
QUESTION [6 upvotes]: It is easy to find examples of locally compact second countable Hausdorff topological groups $G$ whose modular function $\Delta$ has image $\{1\}$ or $(0,\infty)$. Are there groups $G$ of this kind for which the image of $\Delta$ is anything else?
REPLY [5 votes]: Fix a prime $p$, and consider the group $G$ of affine automorphisms of $\mathbb{Q}_p$. That is, take $G=(\mathbb{Q}_p\setminus\{0\})\times\mathbb{Q}_p$ and make it a group by identifying $(a,b)\in G$ with the map $x\mapsto ax+b$ from $\mathbb{Q}_p$ to itself. Writing $\mu$ for the usual additive Haar measure on $\mathbb{Q}_p^2$, we can identify the Haar measures on $G$ as follows. Note that left translation by $(a,b)$ sends $(c,d)$ to $(ac,ad+b)$ and this map multiplies $\mu$-measures of sets by $|a|_p^2$ (since multiplication by $a$ on $\mathbb{Q}_p$ multiplies measures by $|a|_p$, and we are multiplying both coordinates by $a$). It follows that the measure $\mu/|a|_p^2$ is left-invariant on $G$, and so is a left Haar measure. On the other hand, right translation by $(a,b)$ sends $(c,d)$ to $(ac,bc+d)$ which multiplies $\mu$-measures only by $|a|_p$, so $\mu/|a|_p$ is a right Haar measure.
It follows that the modular function of $G$ is $\Delta(a,b)=1/|a|_p$. In particular, the image of $\Delta$ is $\{p^n:n\in\mathbb{Z}\}$.<|endoftext|>
TITLE: A field in which every element (that is not 1 or 0) is a root of -1
QUESTION [7 upvotes]: Let $\mathbb{F}$ be a field with $char(\mathbb{F}) \neq 2$ such that for every element $q \in \mathbb{F}$ if $q \neq 0$ and $q \neq 1$ then there is a power n such that $q^n = -1$. (E.g. $\mathbb{F}_3$, $\mathbb{F}_5$, $\mathbb{F}_{9}$, $\mathbb{F}_{17}$, ...).
Obviously $char(\mathbb{F}) > 0$ (since $\mathbb{F}$ cannot have a $\mathbb{Q}$ subfield).
Furthermore, I guess in finite fields this happens if and only if $|\mathbb{F}| = 2^k+1$ for some k (since in this case $\mathbb{F}^*$ is a cyclic group of order $2^k$ and -1 lies in every non-trivial subgroup)
Of course $\mathbb{F}$ is not algebraically closed (since every element has even multiplicative order and thus $x^{(2n+1)} - 1$ has only 1 root $\forall n$).
But can $\mathbb{F}$ be infinite?
Also is there an infinite number of (finite) fields with this property?
REPLY [6 votes]: The number of such fields is likely finite (up to isomorphism), but this is not known unconditionally.
The complete list can be described as: the fields whose cardinality is a Fermat prime, i.e., a prime of the form $2^k+1$, or $9$. (The former list is likely finite, yet no one knows.)
As you observed correctly, the condition for a finite field is that the order $|F^{\times}|$ is a power of two.
To see this it suffices to note that $-1$ always has order $2$ in this group, so if there is an element of odd order in this group then no power of it can have order $2$. Thus the order of the group cannot be divisible by any odd prime. Conversely, as you said if the order is a power of two then $-1$, the unique element of order $2$, is contained in every non-trivial subgroup and thus a power of it is $-1$.
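The prime-field case is easy to scan by brute force, and the output matches the Fermat-prime description (a small Python sketch; it only covers prime fields, so it does not see the separate prime-power exception $\mathbb{F}_9$):
    def isprime(p):
        return p > 1 and all(p % d for d in range(2, int(p**0.5) + 1))

    def has_property(p):
        # every q in F_p with q != 0, 1 has some power equal to -1 (= p - 1 mod p)
        return all(any(pow(q, k, p) == p - 1 for k in range(1, p)) for q in range(2, p))

    print([p for p in range(3, 300) if isprime(p) and has_property(p)])   # [3, 5, 17, 257]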
Now, it is unknown if there are infinitely many primes of the form $2^k +1$, but the answer is likely no.
As for proper prime powers, the only solution is $3^2$; this is a consequence of Catalan's conjecture (proved by Mihailescu), but that special case was known before.
Clearly no field containing a transcendental element can have your property, and every infinite algebraic field will contain as a subfield a finite field not in our list.<|endoftext|>
TITLE: Finding nilpotent elements in a quotient ring.
QUESTION [6 upvotes]: Which are nilpotent elements of $\mathbb{Q}[x]/(x^5-3x^2)\times\mathbb{Z}/(12)$?
I tried to decompose in this way: $$\mathbb{Q}[x]/(x^5-3x^2)\times\mathbb{Z}/(12)\cong\mathbb{Q}[x]/(x^2)\times\mathbb{Q}[x]/(x^3-3)\times\mathbb{Z}/(3)\times\mathbb{Z}/(4)$$ so I thought that the nilpotent elements are only:
$$(0,0,0,2), (x,0,0,2) \ \ \mbox{and} \ \ (x,0,0,0).$$
I don't know if I am right, because I tried another approach, considering the intersection of all prime ideals of that ring, and I can't tell whether the result is the same.
REPLY [3 votes]: The nilpotent elements of the product are obtained as the tuples of the nilpotent elements of the single factors. In your decomposition these are exactly the elements $(a\bar x, 0, 0, b)$ with $a \in \mathbb{Q}$ and $b \in \{0, 2\}$, so your list is missing the other scalar multiples of $\bar x$ in the first factor.<|endoftext|>
TITLE: Diffuse-like decomposition of the segment $[0,1]$ in accordance with Lebesgue measure
QUESTION [5 upvotes]: Consider the segment $[0,1]\subset\mathbb{R}$ and the standard Lebesgue measure $\mu$ on $\mathbb{R}$. I wonder
if we can find a decomposition $A\sqcup B=[0,1]$ such that for any subsegment $[a,b]\subset[0,1]$ we have $\mu(A\cap [a,b])=\mu(B\cap [a,b])$?
A more particular case is that for any $x\in[0,1]$ we have $\mu(A\cap [0,x])=\mu(B\cap [0,x])$ and hence $\mu(A\cap [0,x])=\frac{x}{2}$.
Metaphor. Such a decomposition can be associated with a mixture of liquids. Suppose we have two liquids of equal amount. We bottle them, shake them up and then pour some into a glass. No matter how much we pour, the glass will contain equal amounts of both liquids.
Ideas. We can decompose $[0,1]$ as $X\sqcup Y$ where $X=[0,1]\setminus\mathbb{Q}$ and $Y=[0,1]\cap\mathbb{Q}$. Here we have $\mu(X)=1$ and $\mu(Y)=0$. Maybe it's possible to describe a procedure of moving points from $X$ to $Y$ (i.e. excluding them from $X$ and including in $Y$) that will lead to desired decomposition. Another thought is to set $A_n=\bigsqcup_{k=0}^{2^{n-1}-1}[2k\cdot2^{-n},\ (2k+1)\cdot2^{-n})$ and $B_n=[0,1]\setminus A_n$ for all $n\in\mathbb{N}$. Then for any $n\in\mathbb{N}$ we'll have
$A_n\sqcup B_n=[0,1]$
$\mu(A_n)=\mu(B_n)$
$|\mu(A\cap [a,b])-\mu(B\cap [a,b])|\leq 2^{-n}$ for any $[a,b]\subset[0,1]$
I wonder if we can take some kind of limit here raising $n\longrightarrow\infty$.
Origin. This question arose after I read this post. It's interesting whether the condition of Riemann integrability is crucial there or whether we could weaken it to Lebesgue integrability. In an attempt to find a counterexample I thought of the above-mentioned decomposition. If it exists then we could define $f(x)=1$ if $x\in A$ and $f(x)=-1$ if $x\in B$. That would be our counterexample because $\int_I f\,d\mu$ would be zero for any interval $I\subset[0,1]$.
Generalization. Suppose $W\subset\mathbb{R}^n$ is a Lebesgue measurable set, and $\{r_i\}_{i=1}^k\subset\mathbb{R}_+$ is such that $r_1+\ldots+r_k=\mu(W)$. Is it possible to choose a decomposition $A_1\sqcup\ldots\sqcup A_k=W$ such that for any measurable $V\subset W$ we have $\mu(A_i \cap V)=r_i\cdot\frac{\mu(V)}{\mu(W)}$ for every $1\leq i\leq k$?
In other words, can we uniformly "blend" measurable subsets of a measurable set in any proportions? It seems to me like a very interesting result, provided it's true.
REPLY [3 votes]: The answer is that we cannot find such subsets $A$ and $B$. Of course, $A$ and $B$ need to be Lebesgue measurable so that $\mu (A)$ and $\mu(B)$ are well-defined.
Let us define $\nu (C) = \mu (A \cap C)$ for all Lebesgue measurable subsets $C$. Then:
$\nu$ is a measure on Lebesgue-measurable sets, as it satisfies all axioms;
$\nu(C) = \mu(C)/2$ for all open intervals.
But open intervals generate the Borel sigma-algebra, so $\nu = \mu/2$ on Borel sets. The completions then also satisfy this relation, so $\nu = \mu/2$ on Lebesgue measurable sets.
But then $\mu(A)/2 = \nu (A) = \mu (A \cap A) = \mu(A)$, so $\mu(A) = 0$, which contradicts our hypotheses.
If you want to describe a mixture of liquids exactly in proportion $1-1$, the most natural way is to define a function $f$ as the (local) proportion of the first liquid in the mixture. For a homogeneous mixing in equal parts, one would have $f \equiv 1/2$ almost everywhere, and indeed $\int_a^b f(t) dt = (b-a)/2$ is the quantity of the first liquid in the mix between points $a$ and $b$.
This point of view also behaves very well with respect to your "limiting process". Let $f_n = 1_{A_n}$ be the repartition of the first liquid. Then, for any measurable and bounded function $g$,
$$\lim_{n \to + \infty} \int_0^1 f_n (x) \cdot g(x) dx = \int_0^1 \frac{1}{2} \cdot g(t) dt,$$
which is to say, the sequence of functions $(f_n)$ converges weakly in $\mathbb{L}^1 ([0,1],\mu)$ to the function $f \equiv 1/2$. So, in this sense at least, the function $f$ is indeed what you get as a limit of your mixing process.
N.B.: If you want to prove the convergence above, an elementary way is to prove it first when $g = 1_{[a,b]}$ for some $a<b$.<|endoftext|>
TITLE: Which concepts in Differential Geometry can NOT be represented using Geometric Algebra?
QUESTION [5 upvotes]: 1. It is not clear to me that linear duals, and not just Hodge duals, can be represented in geometric algebra at all. See, for example, here.
Can linear duals (i.e. linear functionals) be represented using the geometric algebra formalism?
2. It also seems like most tensors cannot be represented, see for example here. This makes intuitive sense since any geometric algebra is a quotient of the corresponding tensor algebra. Also it seems like only some contravariant tensors (and no covariant tensors whatsoever) can be represented unless the answer to 1. is no.
Which types of tensors admit a representation using geometric algebra?
3. The exterior algebra under which differential forms operate can clearly be represented by geometric algebra and its outer product.
However, do objects "sufficiently isomorphic" to differential forms admit a representation in geometric algebra?
This would be preferred given how geometric algebra is more geometrically intuitive than differential forms. See also here.
4. This question seemingly depends on the answers to 1. and 3., since derivations=vector fields are the linear dual of differential forms.
Can vector fields=derivations be represented using geometric algebra?
This paper seems to suggest that the answer is yes, although it was unclear to me. It also listed as references Snygg's and Hestene's books for representing derivations=vector fields via geometric algebra. However, I quickly searched Snygg's book and could not even find the use of the word "bundle" once, which seems to cast doubt on the claim.
Moreover, derivations are just the Lie algebra of smooth functions between manifolds, correct? Since Lie algebras are non-associative, it seems doubtful to me that derivations could be represented effectively by the associative geometric algebra. On the other hand, quaternions are somehow also Lie algebras, and they can be represented in geometric algebra, so I am not sure.
5. This probably a duplicate of 4. but I am asking it anyway.
Do tangent/cotangent spaces/bundles admit a representation using geometric algebra?
This one is especially unclear to me, since using "ctrl-f" the word "bundle" is not used even once in Snygg's book "Differential Geometry via Geometric Algebra", which appears to be the most thorough treatment of the subject.
(Incidentally, the word "dual" also only appears once, in reference to Pyotr Kapitza's dual British and Russian citizenship.)
Basically I am wondering if differential geometry can be "translated" completely using the language of geometric algebra. I think the answer is no because Hestene's conjecture regarding smooth and vector manifolds has yet to be proved (see the comments here), but it seems like we would run up with barriers even sooner than that. Although I probably am misunderstanding the comment.
I have found differential geometry difficult to understand at times, and would like to learn it by translating it as much into geometric algebra and then back. The extent to which the two is "equivalent" obviously presents a barrier to how much this is possible. Still, I already feel like I understand the concepts and motivations of multilinear algebra and related fields much better after having just learned a little geometric algebra, and would like to apply this as much as possible to the rest of differential geometry.
These questions are also related: symmetric products are the inner product from geometric algebra, and wedge products are the outer product from geometric algebra; geometric algebra is a special type of Clifford algebra which contains the exterior algebra over the reals; and this question discusses derivations in algebras in detail.
REPLY [5 votes]: Can linear duals (i.e. linear functionals) be represented using the geometric algebra formalism?
Yes and no.
In geometric algebra, dual vectors can be computed through Hodge duality. Let $\{u_1, u_2, \ldots, u_n\}$ be an orthogonal basis set for an $n$-dimensional vector space. Let $I$ be their geometric product, which is grade-$n$ due to orthogonality. Then $u^i = I u_i$ is, within a scale factor, a unique vector such that $u^i \cdot u_j = 0$ for $i\neq j$ but $u^i \cdot u_i \neq 0$ for nonzero $u_i$. Do a little more work normalizing $I$, and you would get the correct vector that corresponds to the element of the dual space that is dual to $u_i$.
So, geometric algebra lets you compute those vectors, but linear functionals themselves--as functions--have no place in the algebra. The algebra has elements and functions of elements, but I would hesitate to say that linear functionals are elements of the algebra.
That said, you can also construct a geometric algebra over the dual space.
Which types of tensors admit a representation using geometric algebra?
Any that you can suggest an isomorphism between tensors of that form and the algebra itself.
...yes, I know that borders on a non-answer, but let me give an example.
For instance, the linear map $T(a) = B \cdot a$ for vector $a$ and bivector $B$ is a tensor, moreover a linear operator. It's clear that this tensor directly, and uniquely, corresponds to $B$. $B$ entirely determines the action of the tensor.
Contrast this against the form of a general linear operator, $T(a) = \sum_i^n (a \cdot u^i) v_i$ for a basis set $u_i$ and some other set $v_i$, and you see that there is no such direct correspondence in the general case.
However, do objects "sufficiently isomorphic" to differential forms admit a representation in geometric algebra?
That's an easy one. You can write a $k$-form as a $k$-covector field. Any differential form can be written in terms of the algebra--perhaps with the exception of "vector-valued forms" and other such things, but these are no more complicated in geometric calculus than they are in traditional differential forms. Doran and Lasenby or Hestenes and Sobczyk both have extensive chapters on calculus with GA.
Can vector fields=derivations be represented using geometric algebra?
No, with a caveat: the geometric algebra is merely an algebra. It does not care what the underlying vector space is that it is built upon. It does not care whether vector fields are actually derivations.
So, GA can't represent vector fields being derivations because such a consideration is wholly separate from it.
In other words, if you want to take the wedge product of two vector fields and interpret that as meaning something in terms of derivations, that's on you. All GA says is that, if there is a meaningful metric you can impose on the vectors in this vector space, you can build a geometric algebra on it.
Do tangent/cotangent spaces/bundles admit a representation using geometric algebra?
The geometric algebra and its calculus can represent vector fields, but I'm not aware of any construction that allows it to invert things and recover the tangent bundle.
However, if I had to guess, I would say such a thing is probably the inverse of the unit pseudoscalar function on a manifold. Such a function is from $M$ to a grade-$n$ multivector, where $n$ is the dimension of $M$. Inverting this map would yield a map from a pseudoscalar to the manifold, which seems almost exactly like the tangent bundle. Such a function, however, would rely on the pseudoscalar admitting an inverse, which it might not do globally, and I can only imagine this making sense in terms of an embedding.
So where do we stand?
In my opinion, geometric algebra and calculus is more than capable of serving as a full foundation for someone studying differential geometry. Even if you throw away the notion of Hestenes' vector manifolds, you can still use geometric algebra and calculus to compute relations between vector fields or between differential forms. You can translate any differential forms expression into geometric algebra, and general tensors that don't correspond to GA elements can still be represented as linear functions on those elements instead.
There's already been considerable work on the relationship between GA/GC and differential geometry. I recommend Doran and Lasenby for this; they have an in-depth chapter building on Hestenes' vector manifold theory, in which they develop and expand on much of the calculus of GA. But moreover, they also have a chapter on general relativity, in which they develop an alternative to curved spaces for differential geometry, preferring "gauge fields" on flat manifolds instead. This method is superficially very similar to moving frames, and they use it to generate GA equivalents of the Cartan structure equations.<|endoftext|>
TITLE: Is it necessary to prove everything and solve every problem in the books?
QUESTION [22 upvotes]: I am an undergraduate really passionate about mathematics and microbiology. I have a few big problems with learning, about which I would like to seek your advice.
Whenever I study mathematical books (Rudin, Hoffman/Kunze, etc.), I always try to prove every theorem, lemma, corollary, and their relationships in the book. Unfortunately, that determination demands a huge amount of time; sometimes it takes me days to fully understand, and be able to prove, the material in a few pages of the book. I am willing to devote my time to understanding the topics, but I also want to devote time to my undergraduate research projects and other courses. Recently, I started to depend a lot more on the proofs presented in books and websites (like MSE), which has been causing huge guilt and fear that I am not making the knowledge my own.
Despite my effort to prove/solve every problem in each chapter, I found myself skipping some of the problems and moving on to the next chapter, which resulted in a huge fear that I did not fully understand the material.
How do you read mathematics books and make the knowledge your own?
Is it absolutely recommended to prove everything and solve every problem in the book?
Also, is it recommended to devote more time to the problems than to the exposition preceding them? I found myself devoting a lot of time to the actual exposition in the book, as I like to play around with definitions and theorems, try to come up with my own ideas, and formulate my own problems (I actually found that making my own problems is much more fun than the problems presented in the book).
REPLY [10 votes]: Overall answer: no.
I struggled with the same problem as you for quite a long time and, in hindsight, I think I could have spent my time more wisely. Here are my current general guidelines at the time of this post. They may or may not work for you. The overall philosophy I employ is that exercises are usually there to get you comfortable with the material: you are probably usually expected to remember the theorems and (mostly) forget the exercises once you have done them. I expound on this below.
How do you read mathematics books and make the knowledge your own?
Nowadays I spend more time thinking about the material in the text than doing exercises. I treat the theorems as problems and try to see how far I can get in proving them before reading the proof. Importantly, I don't spend all day doing this! A good strategy might be to set yourself a goal, such as "I want to get to such-and-such page by the end of the week", and pace yourself accordingly; if you spend thirty to forty-five minutes thinking hard about a theorem and having no ideas, perhaps take a peek at the proof and continue from there (alternatively, skip that proof and come back to it later).
If the book has examples in the text, read and understand as many as you can. Otherwise, be sure to allocate generous amounts of time towards coming up with your own examples. Note that this is not necessarily easy, and is arguably the most important stage.
Finally, if the book has exercises, be sure to do at least some of them and try to understand the general idea of the rest. Don't be afraid to ask and try to answer your own questions which might be inspired by some of the exercises.
One downside to this method is that it doesn't work for all books and all subject matters. Some authors place key theorems in the exercises and proceed to use them later, expecting you to have proven them yourself. The hope is that for each topic there exists a book for which this method is not useless.
Another downside is that, in rare cases, too many of the exercises are interesting (this appears to be the case with Rudin, among other books)! In this case it's definitely up to you how much time you spend on the exercises. Allocate time according to how interesting you find the exercises and how comfortable you are with the material in the text.
Is it absolutely recommended to prove everything and solve every problem in the book?
If you can do this and still lead a comfortable life, then by all means do so (if it doesn't hurt, then it can only help, right?). Unfortunately, attempting to do this will probably make life less than comfortable for you, so I would advise against being this extreme.
However, the point of this "advice" is that you should get as comfortable with the theorems and the examples as you possibly can, and this is good advice. I would advise just thinking about stuff as much as possible. Thinking about maths in the shower, on the way to the shops, while cooking dinner, etc. will get you used to thinking about the topics that interest you.
Also, is it recommended to devote more time to the problems than to the exposition preceding them?
This heavily depends on both the book and the reader. However, if you prefer to come up with your own examples and fiddle around with the theorems, and if the book you are working from is not expecting you to prove key theorems in the exercises, then I would say this approach is a good substitute for doing exercises.<|endoftext|>
TITLE: Probability of choosing $n$ numbers from $\{1, \dots, 2n\}$ so that $n$ is 3rd in size
QUESTION [8 upvotes]: We uniformly randomly choose $n$ numbers out of the $2n$ numbers in the set $\{1, \dots, 2n\}$, where order matters and repetitions are allowed. What is the probability that $n$ is the $3^{\text{rd}}$ number in size in the chosen series (i.e., there are exactly two chosen numbers bigger than $n$)? Note that if a number bigger than $n$ is chosen more than once, we still count it as one number bigger than $n$.
Example: $n = 5, \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10\}$
$7, 1, 5, 9, 9$ - ok
$2, 2, 10, 5, 7$ - ok
$6, 6, 3, 5, 5$ - not ok, 5 is $2^{\text{nd}}$ in size
My lead was to first choose the 2 numbers bigger than $n$ out of the $n$ numbers bigger than $n$ in the set $\{1, \dots, 2n\}$. Then, in order to complete a series of $n$ numbers, I need to choose $n-3$ more from the numbers that are equal to or smaller than $n$; but then I figured this is not right, as a series could be formed from the 2 numbers bigger than $n$ and $n$ itself without any additional numbers. At this point I have no additional leads and could very much use your assistance and guidance.
REPLY [2 votes]: Choose the two values from $[n+1,2n]$ above $n$ to get
$${n\choose 2}.$$
Let $q$ be the number of additional values from below $n$ that are present
(other than the three values we have already selected). We then have $0\le q\le n-1.$
Counting these configurations we obtain
$${n\choose 2}
\sum_{q=0}^{n-1} {n-1\choose q} \times {n\brace q+3} \times (q+3)!$$
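Before continuing, here is a quick brute-force check of this count for small $n$ (a Python sketch added for illustration; it is not part of the original derivation):
from itertools import product
from math import comb, factorial
from sympy.functions.combinatorial.numbers import stirling

for n in (3, 4, 5):
    # terms with q + 3 > n vanish, so summing q up to n - 3 suffices
    formula = comb(n, 2) * sum(comb(n - 1, q) * stirling(n, q + 3) * factorial(q + 3)
                               for q in range(n - 2))
    # direct enumeration: n appears and exactly two distinct values above n appear
    brute = sum(1 for s in product(range(1, 2*n + 1), repeat=n)
                if n in s and len({v for v in s if v > n}) == 2)
    print(n, formula, brute)   # the two counts agree, e.g. 18, 648, 18300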
Recall the species of set partitions which is
$$\mathfrak{P}(\mathcal{U}\mathfrak{P}_{\ge 1}(\mathcal{Z}))$$
so that the bivariate generating function is
$$G(z, u) = \exp(u(\exp(z)-1))$$
which finally yields
$${n\brace q} =
n! [z^n] \frac{(\exp(z)-1)^q}{q!}.$$
This yields for our sum
$${n\choose 2} n! [z^n]
\sum_{q=0}^{n-1} {n-1\choose q}
\times \frac{(\exp(z)-1)^{q+3}}{(q+3)!}
\times (q+3)!
\\ = {n\choose 2} n! [z^n]
\sum_{q=0}^{n-1} {n-1\choose q}
\times (\exp(z)-1)^{q+3}
\\ = {n\choose 2} n! [z^n] (\exp(z)-1)^3
\exp((n-1)z)
\\ = {n\choose 2} n! [z^n]
(\exp((n+2)z)-3\exp((n+1)z)+3\exp(nz)-\exp((n-1)z)).$$
Extracting coefficients we have
$${n\choose 2}
\left((n+2)^n - 3(n+1)^n + 3n^n - (n-1)^n\right)$$
for a probability of
$$\frac{1}{(2n)^n} {n\choose 2}
\left((n+2)^n - 3(n+1)^n + 3n^n - (n-1)^n\right).$$<|endoftext|>
TITLE: Why is $\int_{-1}^{1} \frac{1}x \mathrm{d}x$ divergent?
QUESTION [8 upvotes]: Isn't
$$\int_{-1}^{1} \frac{1}x \mathrm{d}x=\lim_{\epsilon\to 0^{+}} \int_{-1}^{-\epsilon} \frac{1}x \mathrm{d}x+\int_{-\epsilon}^{\epsilon} \frac{1}x \mathrm{d}x+\int_{\epsilon}^{1} \frac{1}x \mathrm{d}x=0 ?$$
REPLY [20 votes]: First, $\frac 1 x$ isn't defined on $[-1,1]$ (because of what happens in $0$). You could get around this by considering it defined on $[-1,0) \cup (0,1]$, but then you've got another problem, much more serious: the function is unbounded, and the concept of "Riemann integral" is defined only for bounded functions (and bounded intervals).
Finally, you might try to use the concept of "improper integral of the second kind". This doesn't work, either: $\int \limits _{-1} ^1 \frac 1 x \ \Bbb d x = \int \limits _{-1} ^0 \frac 1 x \ \Bbb d x + \int \limits _0 ^1 \frac 1 x \ \Bbb d x = \infty - \infty$ which is indeterminate.
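To see this concretely, here is a tiny numerical illustration (my own addition), using the exact antiderivative $\ln|x|$: each one-sided piece blows up as the cutoff shrinks, while the symmetric combination is always $0$.
from math import log

for eps in (1e-2, 1e-4, 1e-8):
    left = log(eps)     # integral of 1/x over [-1, -eps], via ln|x|
    right = -log(eps)   # integral of 1/x over [eps, 1]
    print(eps, left, right, left + right)   # the halves diverge, their sum stays 0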
What you are trying to do is to give a meaning to that integral using the concept of "principal value", in the framework of distribution theory. But this is clearly not the same as saying that your function is integrable with integral $0$.<|endoftext|>
TITLE: Proof that Epicycloids are Algebraic Curves?
QUESTION [5 upvotes]: Epicycloids are most commonly described by the parametric equations,
$x(t) = (R + a)\cos(t) - a \cos \left(\frac{R + a}{a} t \right),$
$y(t) = (R + a)\sin(t) - a \sin \left(\frac{R + a}{a} t \right).$
Where $R$ is the radius of the fixed circle and $a$ is the radius of the rolling circle.
With $R = ka$ we also have,
$x(t) = a[(k + 1)\cos(t) - \cos((k + 1)t)],$
$y(t) = a[(k + 1)\sin(t) - \sin((k + 1)t)].$
Several books discussing epicycloids mention that if the ratio of the radii of the circles $\left( \frac{R}{a} = k \right)$ is rational, then they are algebraic curves. However, I’ve only been able to find the Cartesian equations for the cardioid, nephroid and ranunculoid. With the cardioid being,
$(ax + x^2 + y^2)^2 = a^2(x^2 + y^2).$
The nephroid,
$(-4a^2 + x^2 + y^2)^3 = 108a^4y^2.$
And the ranunculoid,
$-52521875a^{12} - 1286250a^{10} (x^2 + y^2) - 32025a^8 (x^2 + y^2)^2 + 93312a^7 (x^5 - 10x^3y^2 + 5xy^4) - 812a^6 (x^2 + y^2)^3 - 21a^4 (x^2 + y^2)^4 - 42a^2(x^2 + y^2)^5 + (x^2 + y^2)^6 = 0.$
Clearly this doesn’t cover all epicycloids where $\frac{R}{a}$ is rational.
What is the proof that shows that epicycloids, where the ratio of the radii are rational, are algebraic curves?
REPLY [2 votes]: If the number is rational, using multiplication formulas for $\cos$ and $\sin$ you can express the original parametrization in terms of $\cos$ and $\sin$ of multiples of a single argument. Then you can use a rational parametrization of the circle to replace those $\cos$ and $\sin$ by rational functions of the parameters, and then you can eliminate the parameter.
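Before the Mathematica version below, here is a rough sketch of the same idea in Python/sympy for the simplest rational case $R = a$ (the cardioid). Instead of the tangent half-angle substitution, it eliminates $\cos t$ and $\sin t$ directly using $\cos^2 t + \sin^2 t = 1$; the variable names are my own, and the resulting polynomial may differ from the standard cardioid form quoted above by a translation or scaling.
import sympy as sp

x, y, c, s = sp.symbols('x y c s')
# parametrization with R = a = 1, using cos(2t) = 2c^2 - 1 and sin(2t) = 2cs
eqs = [x - (2*c - (2*c**2 - 1)),
       y - (2*s - 2*c*s),
       c**2 + s**2 - 1]
# a lex Groebner basis with c, s listed first eliminates the parameter
G = sp.groebner(eqs, c, s, x, y, order='lex')
implicit = [g for g in G.exprs if not g.has(c) and not g.has(s)]
print(implicit)   # a polynomial in x and y only: the cardioid as an algebraic curve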
The following Mathematica code does this:
R = 1;
a = 4;
u[t_] := (R + a) Cos[t] - a Cos[(R + a) t/a];
v[t_] := (R + a) Sin[t] - a Sin[(R + a) t/a];
Eliminate[
TrigExpand[{x - u[t] == 0, y - v[t] == 0}
/. t -> a s]
/. {Cos[s] -> (1 - m^2)/(1 + m^2), Sin[s] -> 2 m/(1 + m^2)},
{m}
]
With $R=1$ and $a=3$, we get the equation
$$x^8+x^6 \left(4 y^2-84\right)+x^4 \left(6 y^4-252 y^2+1974\right)+x^2 \left(4 y^6-252 y^4+3948
y^2-10612\right)+13824 x=-y^8+84 y^6-1974 y^4+10612 y^2+5103$$
With $R=1$ and $a=4$,
$$x^{10}+x^8 \left(5 y^2-180\right)+x^6 \left(10 y^4-720 y^2+10710\right)+x^4 \left(10 y^6-1080
y^4+32130 y^2-231060\right)+x^2 \left(5 y^8-720 y^6+32130 y^4-462120 y^2+1230705\right)-1600000
x=-y^{10}+180 y^8-10710 y^6+231060 y^4-1230705 y^2-58982$$<|endoftext|>
TITLE: Proof of $(\neg A \supset A) \supset A$
QUESTION [5 upvotes]: As a (total) beginner in logic, I read this introduction: http://www.loria.fr/~roegel/cours/logique-pdf.pdf (in French). It gives an exercise I couldn't complete. Could someone help me (give an answer or just a clue)?
Using substitution, modus ponens and these axioms :
A1 : $(A\lor A)\supset A$
A2 : $B \supset (A \lor B)$
A3 : $(A\lor B) \supset (B \lor A)$
A4 : $(A\lor (B\lor C)) \supset (B\lor(A\lor C))$
A5 : $(B \supset C) \supset ((A\lor B) \supset (A \lor C))$
Prove : $(\neg A \supset A) \supset A$
I tried many combinations of these axioms and rules of inference, but not the right one(s).
Thank you
Edit: Here, logical implication $P \supset Q$ is an abbreviation for $\neg P \lor Q$, and $\neg$ is a primitive.
The exercise is left undone on page 19: "We leave the proof of the third Lukasiewicz axiom as an exercise." That is: prove the third Lukasiewicz axiom from the Whitehead and Russell axioms (page 18).
REPLY [3 votes]: Double-negation introduction: $B\supset \neg\neg B$. You know how to derive $\neg B\supset \neg B$, which is an abbreviation for $\neg\neg B\lor \neg B$. Then A3 gives you $\neg B\lor \neg\neg B$, which is the same as $B\supset \neg\neg B$.
Double-negation elimination: $\neg\neg A \supset A$. You know how to derive $A\supset A$, which is just $\neg A\lor A$. Now apply double-negation introduction (with $B=\neg A$) to the left-hand side of that (using A3-A5-A3), giving $\neg\neg\neg A\lor A$, which is the same as $\neg\neg A\supset A$.
Proof by contradiction: $(\neg A \supset A)\supset A$. The premise $\neg A\supset A$ is the same as $\neg\neg A\lor A$. By A3-A5-A3 with double-negation elimination, this implies $A \lor A$, and A1 then gives you $A$.<|endoftext|>
TITLE: Intersection of Compact sets Contained in Open Set
QUESTION [5 upvotes]: Just wanted to see if my proof of the following is valid:
Let $\{K_i\}_{i=1}^{\infty}$ be compact sets (in some metric space), and let $V$ be an open set such that $$ \bigcap_{i=1}^{\infty} K_i \subset V.$$ Then there exists $m$ such that $$\bigcap_{i=1}^{m} K_i \subset V.$$
Proof: Suppose not. Then for each $n$, there exists $$x_n \in \bigcap_{i=1}^{n} K_i \cap V^c.$$
Let $\{x_n\}_{n=1}^{\infty}$ be the sequence so formed. In particular, this is a sequence in $K_1$ and thus has a convergent subsequence with limit $\hat{x} \in K_1$. Relabel this convergent subsequence as $\{x_n\}_{n=1}^{\infty}$. Now, there exists $k(j)$ so that $\{x_n\}_{n=k(j)}^{\infty} \subset K_j$, and again has a convergent subsequence that converges to $\hat{x} \in K_j$. Since this holds for any $j$, $$\hat{x} \in \bigcap_{i=1}^{\infty} K_i.$$ Since $V^c$ is closed, $\hat{x} \in V^c$, a contradiction.
And maybe a hint as to how to proceed for a purely topological proof?
REPLY [4 votes]: It’s basically correct. There’s a typo at the very beginning, where you meant to write ‘For each $n$ there exists’ (instead of $j$). And you need to pass to a convergent subsequence only once: the tail sequence $\langle x_n:n\ge k(j)\rangle$ already converges to $\hat x$, since it’s a subsequence of a sequence converging to $\hat x$.
For a proof in case $X$ is not necessarily metric, let $F_n=(K_1\cap K_n)\setminus V$ for each $n\in\Bbb Z^+$, and work in the compact subspace $F_1$. Show that if the conclusion of the theorem fails,
each $F_n$ is compact, and
$\bigcap_{k=1}^nF_k\ne\varnothing$ for each $n\in\Bbb Z^+$,
but $\bigcap_{n\in\Bbb Z^+}F_n=\varnothing$.
Notice that this argument can be adapted to any family $\mathscr{K}$ of compact sets with non-empty intersection; it doesn’t use the countability of the family.
Added: As Rob Arthan in effect notes in the comments, you do need to assume that $X$ is a $KC$-space, meaning one in which compact sets are closed, in order to ensure that the intersection of a centred family of compact sets is non-empty; this property lies strictly between $T_1$ and $T_2$.<|endoftext|>
TITLE: Spectrum of a triangle; Beltrami operator
QUESTION [6 upvotes]: I would like to find the spectrum of a triangle (e.g. an equilateral one) using the usual Laplacian. I have not been able to find references on the subject. I'd like to try to solve the problem myself with a little help. Could someone give me the general idea of the resolution and how I should treat the boundary conditions of this problem?
Help would be appreciated!
REPLY [12 votes]: With fixed Dirichlet or Neumann boundary conditions, to my knowledge the only triangles with explicitly known Laplace spectra are:
equilateral,
isosceles right, and
"hemiequilateral" 30-60-90.
To compute the spectrum of the isosceles right, notice that it is the quotient of the plane by a reflection group, so its eigenfunctions are linear combinations of those of the square which satisfy the given boundary condition on the diagonal.
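To make the reflection idea concrete, here is a small sympy check (my own illustration): antisymmetrizing two square eigenfunctions gives a Dirichlet eigenfunction of the isosceles right triangle $\{0\le y\le x\le\pi\}$.
import sympy as sp

x, y = sp.symbols('x y')
m, n = 1, 2
u = sp.sin(m*x)*sp.sin(n*y) - sp.sin(n*x)*sp.sin(m*y)   # antisymmetrized square eigenfunctions
lam = m**2 + n**2
print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2) + lam*u))   # 0: eigenfunction with eigenvalue 5
# u vanishes on the three sides y = 0, y = x, x = pi of the triangle
print(sp.simplify(u.subs(y, 0)), sp.simplify(u.subs(y, x)), sp.simplify(u.subs(x, sp.pi)))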
The equilateral triangle's Laplace spectra (Dirichlet and Neumann) were worked out by Lamé, and in a different fashion by Pinsky. They are related to the action of the equilateral triangle group on the plane. There is a compilation of papers on the equilateral triangle's spectrum by McCartin.
If you sit down and try to work out other examples that aren't the result of a reflection group acting on the plane, the mechanical computation of eigenfunctions/eigenvalues breaks down. Without some extra structure, you're going to run into difficulty in finding finite linear combinations of functions that satisfy the self-adjoint boundary conditions you've chosen.
For example, let's look at a right triangle that is not hemiequilateral or isosceles. Specify the triangle as the domain
$$ T = \{(x,y)\ |\ x\geq 0,y\geq 0,2x+y\leq 10\}.$$
Let's choose Dirichlet boundary conditions. The eigenvalue problem is now $$ u_{xx}(x,y) + u_{yy}(x,y) = \lambda u(x,y),\ \ u(0,y) = u(x,0) = u(t,10-2t) = 0 $$ for all $(x,y)\in T$. (I parametrized the line $2x+y=10$ with $t\mapsto (t,10-2t)$.)
If we try to separate variables, mechanically following the procedure for finding the spectrum of a rectangle, we get as far as imposing the first two boundary conditions: $$u(x,y) = \sum_c a_c\sin(\sqrt{\lambda}x)\sin((c-\sqrt{\lambda})y)$$
where we have a linear combination of such products of sine functions.
And now we need to find $\lambda$ such that
$$ 0 = \sum_c a_c \sin(\sqrt{\lambda}t)\sin((c-\sqrt{\lambda})(10-2t)). $$
Brick wall.
In fact, there are only a few polygonal domains whose eigenfunctions are all trigonometric: c.f. Theorem 2 of this paper, also by McCartin:
The only polygonal domains possessing
a complete set of trigonometric eigenfunctions of the form of Equation (2) [i.e., are finite linear combinations of trigonometric functions]
are those shown in Figure 1: the rectangle, the square, the isosceles right triangle, the equilateral triangle and the hemiequilateral triangle
Theorem 3 goes on to indicate which domains have some trigonometric eigenfunctions.
For more on triangles in general, this paper by Harmer explores the spectra of Euclidean and spherical triangles from the perspective of finite group actions (but does not more than touch on hyperbolic triangle groups, which are a much subtler topic).
One can still make qualitative statements regarding the spectra of triangles. For example, a paper of Hillairet-Judge proves that the Dirichlet spectrum of a generic Euclidean triangle is simple.
It is also possible to study the spectra of Euclidean triangles numerically. For instance, this paper of Berry studies "diabolical points" in the spectrum, i.e., triangles whose spectrum contains a repeated eigenvalue. I believe it is conjectured, but not known, that when a triangle has multiplicity at a point in its spectrum, then over a small neighborhood of that triangle in moduli space the graphs of the two eigenvalue branches form a cone.
One tool for numerically studying the spectrum of triangles is the method of particular solutions; see this paper of Betcke-Trefethen and its description of numerically computing the spectrum of domains. This method has been adapted to other domains and manifolds, particularly hyperbolic triangles and surfaces (e.g. work of Strohmaier-Uski).
References:
Pinsky, Mark A. The eigenvalues of an equilateral triangle. SIAM J. Math. Anal. 11 (1980), no. 5, 819–827. MR0586910
Pinsky, Mark A.(1-NW) Completeness of the eigenfunctions of the equilateral triangle. SIAM J. Math. Anal. 16 (1985), no. 4, 848–851. MR0793926
McCartin, Brian J. Laplacian eigenstructure of the equilateral triangle. Hikari Ltd., Ruse, 2011. x+200 pp. ISBN: 978-954-91999-6-3 MR2918422
McCartin, Brian J. On polygonal domains with trigonometric eigenfunctions of the Laplacian under Dirichlet or Neumann boundary conditions. Appl. Math. Sci. (Ruse) 2 (2008), no. 57-60, 2891–2901. MR2480444
Harmer, Mark The spectra of the spherical and Euclidean triangle groups. J. Aust. Math. Soc. 84 (2008), no. 2, 217–227. MR2437339
Hillairet, Luc; Judge, Chris. Spectral simplicity and asymptotic separation of variables. Comm. Math. Phys. 302 (2011), no. 2, 291–344. MR2770015
Berry, M. V.; Wilkinson, M. Diabolical points in the spectra of triangles. Proc. Roy. Soc. London Ser. A 392 (1984), no. 1802, 15–43. MR0738925
Betcke, Timo; Trefethen, Lloyd N. Reviving the method of particular solutions. SIAM Rev. 47 (2005), no. 3, 469–491 (electronic). MR2178637
Strohmaier, Alexander; Uski, Ville. An algorithm for the computation of eigenvalues, spectral zeta functions and zeta-determinants on hyperbolic surfaces. Comm. Math. Phys. 317 (2013), no. 3, 827–869. MR3009726<|endoftext|>
TITLE: How do I prove $\int_{-\infty}^{\infty}{\cos(x+a)\over (x+b)^2+1}dx={\pi\over e}{\cos(a-b)}$?
QUESTION [6 upvotes]: How do I prove these?
$$\int_{-\infty}^{\infty}{\sin(x+a)\over (x+b)^2+1}dx={\pi\over e}\color{blue}{\sin(a-b)}\tag1$$
$$\int_{-\infty}^{\infty}{\cos(x+a)\over (x+b)^2+1}dx={\pi\over e}\color{blue}{\cos(a-b)}\tag2$$
I am trying to apply the residue theorem to $(2)$
$$f(x)={\cos(x+a)\over (x+b)^2+1}$$
$(x+b)^2+1$=$(x+b-i)(x+b+i)$
$$2\pi{i}Res(f(x),-b-i)=2\pi{i}\lim_{x\rightarrow -b-i}{\cos(a-b-i)\over -2i}=-\pi\cos(a-b-i)$$
$$2\pi{i}Res(f(x),-b+i)=2\pi{i}\lim_{x\rightarrow -b+i}{\cos(a-b+i)\over 2i}=\pi\cos(a-b+i)$$
How do I suppose to evaluate $\cos(a-b-i)$ and $\cos(a-b+i)$?
REPLY [4 votes]: $$\int_{-\infty}^{\infty}{\cos(x+a)\over (x+b)^2+1}dx=\cos (a-b)\int_{-\infty}^{\infty}{\cos x\over x^2+1}dx-\sin (a-b)\int_{-\infty}^{\infty}{\sin x\over x^2+1}dx$$
Let $\lambda\in\mathbb{R}$, set
$$I(\lambda)=\int_{-\infty}^{\infty}{\cos(\lambda x)\over x^2+1}dx$$
we integrate by parts, writing
$$u=\frac{1}{x^{2}+1}\quad,\quad dv=\cos (\lambda x)\,dx$$
we have
$$I(\lambda )=\left.\frac{\sin (\lambda x)}{\lambda (x^{2}+1)}\right|_{-\infty}^{\infty}+\frac{2}{\lambda }\int_{-\infty }^{+\infty }{\frac{x\sin (\lambda x)}{(x^{2}+1)^{2}}}\,dx$$
as a result
$$\lambda I(\lambda )=2\int_{-\infty }^{\infty }{\frac{x\sin \lambda x}{{{({{x}^{2}}+1)}^{2}}}\,}dx \,.\quad(1)$$
Differentiating with respect to $\lambda$, we get
$$\lambda \frac{dI}{d\lambda }+I(\lambda )=2\int_{-\infty }^{\infty }{\frac{{{x}^{2}}\cos \lambda x}{{{({{x}^{2}}+1)}^{2}}}\,}dx=\underbrace{2\int_{-\infty }^{\infty }{\frac{\cos \lambda x}{{{x}^{2}}+1}\,}dx}_{2I(\lambda )}-2\int_{-\infty }^{\infty }{\frac{\cos \lambda x}{{{({{x}^{2}}+1)}^{2}}}\,}dx$$
therefore
$$\lambda \frac{dI}{d\lambda }-I(\lambda )=-2\int_{-\infty }^{\infty }{\frac{\cos \lambda x}{{{({{x}^{2}}+1)}^{2}}}\,}dx$$
and
$$\lambda \frac{{{d}^{2}}I}{d{{\lambda }^{2}}}=2\int_{-\infty }^{\infty }{\frac{x\sin \lambda x}{{{({{x}^{2}}+1)}^{2}}}\,}dx.\quad(2)$$
Comparing $(1)$ and $(2)$,
$$\frac{{{d}^{2}}I(\lambda)}{d{{\lambda }^{2}}}- I(\lambda )=0$$
thus
$$I(\lambda)=c_1e^{\lambda}+c_2e^{-\lambda}$$
on the other hand
\begin{align}
& I(0)={{c}_{1}}+{{c}_{2}}=\int_{-\infty }^{+\infty }{\frac{1}{{{x}^{2}}+1}}\,dx=\pi \,\,\,\,\Rightarrow \,\,{{c}_{1}}+{{c}_{2}}=\pi \, \\
& I(\lambda )=\frac{2}{\lambda }\int_{-\infty }^{+\infty }{\frac{x\sin \lambda x}{{{({{x}^{2}}+1)}^{2}}}\,\,}dx\,\,\,\Rightarrow \,\,\underset{\lambda \to \infty }{\mathop{\lim }}\,I(\lambda )=0\,\,\,\Rightarrow \,{{c}_{1}}=0 \\
\end{align}
then
$$I(\lambda )=\pi {{e}^{-\lambda }}$$
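(A quick numerical sanity check of this formula on a large but finite grid, added for illustration; agreement is to several decimal places:)
import numpy as np

x = np.linspace(-2000, 2000, 4_000_001)
for lam in (0.5, 1.0, 2.0):
    numeric = np.trapz(np.cos(lam*x)/(x**2 + 1), x)   # truncated-tail approximation of I(lam)
    print(lam, numeric, np.pi*np.exp(-lam))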
set $\lambda=1$, we have
$$\cos (a-b)\int_{-\infty}^{\infty}{\cos x\over x^2+1}dx=\frac{\pi}{e}\cos (a-b)$$
Now set
$$J(\lambda)=\int_{-\infty}^{\infty}{\sin(\lambda x)\over x^2+1}dx$$
We repeat this procedure to get
$$J(\lambda)=c_1e^{\lambda}+c_2e^{-\lambda}$$
and
\begin{align}
& J(0)={{c}_{1}}+{{c}_{2}}=0 \\
& \underset{\lambda \to \infty }{\mathop{\lim }}\,J(\lambda )=0\Rightarrow \,{{c}_{1}}=0 \\
\end{align}
i.e. $J(\lambda)=0$ thus
$$\int_{-\infty}^{\infty}{\cos(x+a)\over(x+b)^2+1}dx=\frac{\pi}{e}\cos(a-b)$$<|endoftext|>
TITLE: What's going on with this 5-line proof of Fermat's Last Theorem?
QUESTION [18 upvotes]: I'm reading a book on the Philosophy of Mathematics, and the author gave a "5-line proof" of Fermat's Last Theorem as a way to introduce the topic of inconsistency in set theory and logic. The author acknowledges that this is not a real proof of the theorem, but the way it was presented implies that it was supposed to look somewhat convincing. I, however, have absolutely no idea how the proof given even remotely relates to FLT, and would greatly appreciate it if someone could make the connection for me. Below is an almost verbatim excerpt from the book.
Theorem: There are no positive integers $x$, $y$, and $z$, and integer $n>2$, such that $x^n + y^n = z^n$.
Proof. Let $R$ stand for the Russell set, the set of all things that are not members of themselves: $R= \{x : x \notin x\}$. It is straightforward to show that this set is both a member of itself and not a member of itself: $R \in R$ and $R \notin R$. So since $R \in R$, it follows that $R \in R$ or FLT. But since $R \notin R$, by disjunctive syllogism, FLT. End.
REPLY [7 votes]: Let $S$ be a statement which is both true and false. Since $S$ is true, $S \lor T$ is true for any statement $T.$ Since $S$ is false and $S \lor T$ is true, $T$ must be true. Thus $T$ is a true statement, i.e. every statement is true.
This is the idea that the "proof" is trying to get across. The statment that $R \in R$ is both true and false, and so using the above arguement, you can prove that anything is true, in particular FLT. Similarly, you can prove that any statement is false.<|endoftext|>
TITLE: Explanation about reduced residue system theorem
QUESTION [6 upvotes]: I need an explanation of the following theorem from Leveque's Elementary Theory of Numbers (page 44):
Theorem 3-7 If $(m,n)=1$ then $\varphi(mn)=\varphi(m)\varphi(n)$
Proof: Take integers m,n with $(m,n)=1$, and consider the numbers of the form $mx+ny$. If we can so restrict the values which x and y assume that these numbers form a reduced residue system (mod mn), there must be $\varphi(mn)$ of them. But their number is also the product of the number of values which x assumes and the number of values which y assumes. Clearly, in order for $mx+ny$ to be prime to m, it is necessary that $(m,y)=1$, and likewise we must have $(n,x)=1$. Conversely, if these last two conditions are satisfied, then $(mx+ny,mn)=1$, since in this case any prime divisor of m, or of n, divides exactly one of the two terms in$ mx+ny$. Hence let x range over a reduced residue system (mod n), say $x_1,...,x_{\varphi(n)}$, and let y run over a reduced residue system (mod m), say $y_1,...,y_{\varphi(m)}$. If for some indices $i,j,k,l$ we have
$$mx_i+ny_j \equiv mx_k + ny_l (mod\text{ } mn)$$
then
$$m(x_i-x_k)+n(y_j-y_l)\equiv 0(mod \text{ } mn)$$
Since divisibility by mn implies divisibility by m, we have
$$m(x_i-x_k)+n(y_j-y_l) \equiv 0(mod\text{ } m)$$
$$n(y_j-y_l) \equiv 0 (mod\text{ } m)$$
$$y_j \equiv y_l(mod\text{ } m)$$
whence $j =l$. Similarly, $i=k$. Thus the numbers $mx+ny$ so formed are incongruent (mod mn). Now let a be any integer prime to mn: in particular, $(a,m)=1$ and $(a,n)=1$. Then theorem 2-6 shows that there are integers X,Y (not necessarily in the chosen reduced residue systems)) such that $mX+nY=a$, whence also $mX+nY \equiv a(mod\text{ } mn)$. Since $(m,Y)=(n,X)=1$, there is an $x_i$ such that $X\equiv x_i(mod n)$ and there is a $y_j$ such that $Y \equiv y_j(mod\text{ } m)$. This means that there are integers $k,l$ such that $X=x_i+kn$, $Y=y_j+lm$. Therefore,
$$mX+nY=m(x_i+kn)+n(y_j+lm)\equiv mx_i+ny_j \equiv a(mod\text{ } mn)$$
Hence, as x and y run over fixed reduced residue systems (mod n) and (mod m), respectively, $mx+ny$ runs over a reduced residue system (mod mn), and the proof is complete.
The book was doing a very good job of explaining theorems in an easy way up to this one. I couldn't hate that proof more; it glosses over a lot of things that are hard to understand. I underlined some of the things that I didn't get, but there are still more. I would be glad if someone provided a simplified or better explained proof of this theorem. The first thing that I would like to understand is what they mean when they refer to $x$ ranging over a residue system. Does that mean $x$ is congruent to some term in that specific residue system mod $n$?
REPLY [2 votes]: The proof works as follows. Let $\,U_m,\,U_n,\, U_{mn}$ be reduced residue systems mod $\,m,n,mn.\,$
We show that if $\,x\in U_n,\, y\in U_m$ then $\,mx+ny\in U_{mn}\,$ and this mapping yields a bijection $\, U_m \times U_n \cong U_{mn}.\,$ Comparing cardinalities we get $\,\varphi(m)\varphi(n) = \varphi(mn).\,$ Note: when we write $\,z\in U_i\,$ we mean $\,z\equiv z'\,$ for some $\,z'\in U_i$
First we show that $\,mx+ny\,$ lies in $\,U_{mn},\,$ i.e. it is coprime to $\,mn.\,$ Note
$$(mx+ny,m) = (ny,m) = (y,m) = 1\ \ {\rm by}\ \ y\in U_m$$
$$(mx+ny,n) = (mx,n) = (x,n) = 1\ \ {\rm by}\ \ x\in U_n$$
Since $\,mx+ny\,$ is coprime to $\,m,n\,$ it is coprime to their lcm = product, so it is in $U_{mn}$
Next we show the map is injective, i.e. $1$-$1$. If $\ mX+nY = mx+ny\ $ then $\, m(X-x) = n(y-Y)\,$ so by $\,m,n\,$ coprime we get $\,m\mid y-Y,\,$ $n\mid X-x,\,$ so $\, X\equiv x\pmod n,\,$ $\,Y\equiv y\pmod m$
Finally we show the map is surjective (onto). Let $\,a\in U_{mn}$. By Bezout $\, mx + ny = a\,$ for some $\,x,y.\,$ So, mod $\,n\!:\ mx\equiv a,\,$ so $\,x \equiv m^{-1}a,\,$ where $\,m^{-1}$ exists by $\,(m,n)=1.\,$ Also $\,(x,n) = 1,\,$ else $\,p\mid x,n\,$ so $\,p\mid a= mx\!+\!ny,\,$ contra $\,(a,n)=1$ by $\,(a,mn)=1.\,$ Since $\,(x,n)=1\,$ we infer $\,x\,$ is congruent to some $\,x_i\in U_n.\,$ Similarly $\,y\equiv y_j \in U_m.\,$ Your final equation shows $\,mx_i+n y_j = m(x+jn)+n(y+km)\equiv mx+ny\equiv a\pmod{mn},\,$ so $\,(x_i,y_j)\in U_n\times U_m\,$ maps to $\,a\in U_{mn},\,$ so the map is onto.
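A quick computational check of this bijection for one concrete coprime pair (my own addition, not part of the proof):
from math import gcd

m, n = 8, 15                    # any coprime pair works

def U(k):
    return {a for a in range(1, k + 1) if gcd(a, k) == 1}

image = {(m*x + n*y) % (m*n) for x in U(n) for y in U(m)}
print(len(U(m)) * len(U(n)), len(U(m*n)))   # 32 32 : phi(8)*phi(15) = phi(120)
print(image == U(m*n))                      # True: mx + ny hits each reduced residue mod mn exactly once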
Remark $\ $ The proof will become much clearer when you learn about the ring-theoretic view of CRT = Chinese Remainder Theorem. Then it follows simply from
$$\Bbb Z/mn\, \cong\, \Bbb Z/m \oplus \Bbb Z/n\ \Rightarrow\ U(\Bbb Z/mn)\, \cong\, U(\Bbb Z/m) \times U( \Bbb Z/n) $$
where $\,U(R)\,$ denotes the group of units (invertibles) of $R$.<|endoftext|>
TITLE: $\|\hat{f} \|_{\infty} = \lim _ {n \rightarrow \infty} (\|f^{(n)}\|_1)^{1/n}$
QUESTION [9 upvotes]: Let $f \in L^2 \cap L^1$ on the Real line, and define $f^{(n)}$ to be the $n$-fold convolution $f \circ f ... \circ f $.
I want to show that $||\hat{f} ||_{\infty} = \lim _ {n \rightarrow \infty} (||f^{(n)}||_1)^{1/n}$, using the tools of Fourier analysis on $L_1$ and $L_2$.
And actually I'm only stuck on the fact that the RHS $\le$ LHS.
A formal proof would be something like this, but I'm stuck on technicalities:
\begin{align}\lim_n (\|f^{(n)}\|_1)^{1/n} &= \lim_n [\int f^{(n)} \overline{\exp{ (i \arg f^{(n)})}}]^{1/n}
= \lim_n \left\langle f^{(n)}, \exp{ (i \arg f^{(n)})}\right\rangle ^{1/n}\\
& = \lim_n \left\langle\widehat{f^{(n)}}, \widehat{\exp{ (i \arg f^{(n)})}}\right\rangle^{1/n}
= \lim_n \left\langle{\hat{f}^n}, \widehat{\exp{ (i \arg f^{(n)})}}\right\rangle^{1/n} \\
& = \lim_n \left[\int \hat{f}^n \overline{\widehat{\exp{ (i \arg f^{(n)})}}}\right]^{1/n}
\le \lim_n \left[\| \hat{f} \|^n _\infty\int \overline{\widehat{\exp{ (i \arg f^{(n)})}}}\right]^{1/n}
\le \|\hat{f}\|_\infty
\end{align}
Trouble is, $\exp{ (i \arg f^{(n)})}$ is not integrable since its magnitude is always 1. I have tried to do an approach where I insert $g_k$ where $g_k$ is a compact smooth "hill" function which becomes wider and wider and limits to $1$, and this allows me to arrive at
\begin{align}
\lim_n (||f^n||_1)^{1/n} &= \lim_n \lim_k [\int f^{(n)} \overline{g_k \exp{ (i \arg f^{(n)})}}]^{1/n} \\
&\le \lim_n \lim_k \left[\| \hat{f} \|^n _\infty\int \overline{\widehat{g_k \exp{ (i \arg f^{(n)})}}}\right]^{1/n}\\
& \le \| \hat{f} \|_\infty \lim_n \lim_k \int \overline{\widehat{g_k \exp{ (i \arg f^{(n)})}}}]^{1/n}
\end{align}
But I can't actually take the limit $k$ because then the term will go to infinity. I thought of making $k$ a function of $n$ but then I couldn't show that this doesn't change the limit.
This strategy is taken from an analogous proof on the periodic circle with discrete Fourier transform, and I would like to see if it can be fixed somehow (because this was the hint given by the text).
REPLY [5 votes]: @Romain and @Glitch pointed out errors or gaps in an answer I posted some time ago. As @David points out, the result is true, even without the assumption that $f \in L^2(\mathbb{R})$. I cannot fix my proof, hence I am deleting it.
If someone adds a proof later on, I will completely remove my post.<|endoftext|>
TITLE: Why is some power of a permutation matrix always the identity?
QUESTION [12 upvotes]: If you take powers of a permutation, why is some
$$
P^k = I
$$
Find a 5 by 5 permutation
$$
P
$$
so that the smallest power to equal I is
$$
P^6 = I
$$
(This is a challenge question. Hint: combine a 2 by 2 block with a 3 by 3 block.)
I couldn't solve the question anyway, but what does 2 by 2 block mean? Is block another way of saying matrix? Thanks
REPLY [15 votes]: There are only finitely many ways to permute finitely many things. So in the sequence
$$P^1,\ P^2,\ P^3,\ldots$$
of powers of a permutation $P$, there must eventually be two powers that give the same permutation, meaning that $P^i=P^j$ for some $i>j\geq0$. Permutations are reversible so $P$ is invertible, hence
$$P^{i-j}=P^iP^{-j}=P^j(P^j)^{-1}=I.$$
And yes, a $2\times2$ block means a $2\times2$ matrix here. The hint suggests choosing a $5\times5$ matrix that has a $2\times2$ matrix and a $3\times3$ matrix on its diagonal, and zeroes elsewhere.
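For the challenge part, a quick numpy check of this block construction (a sketch added for illustration):
import numpy as np

P2 = np.array([[0, 1],
               [1, 0]])          # a 2-cycle, order 2
P3 = np.array([[0, 0, 1],
               [1, 0, 0],
               [0, 1, 0]])       # a 3-cycle, order 3
P = np.block([[P2, np.zeros((2, 3), dtype=int)],
              [np.zeros((3, 2), dtype=int), P3]])

for k in range(1, 7):
    if np.array_equal(np.linalg.matrix_power(P, k), np.eye(5, dtype=int)):
        print("smallest power giving I:", k)   # prints 6 = lcm(2, 3)
        break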
REPLY [11 votes]: Ok, here you go: Note that in a finite group every element has finite order; see for example here for a proof. This means that in a finite group $G$, for every $g\in G$ you can find an $n\in \mathbb{N}$ s.t. $g^n=e$.
Now you have a group homomorphism $\varphi:S_n\to Gl_n$ via the following map: take the standard basis $e_1,\ldots,e_n$ and an element $\sigma \in S_n$; then $\varphi(\sigma)=(e_{\sigma(1)},\ldots,e_{\sigma(n)})$, by which I mean the matrix whose columns are these vectors. You should check that this is indeed a group morphism.
For a group morphism you have that $\varphi(\sigma)^k=\varphi(\sigma^k)$, and since $S_n$ is a finite group you can find a $k \in \mathbb{N}$ s.t. $\varphi(\sigma)^k=\varphi(\sigma^k)=\varphi(id_{S_n})=id_{Gl_n}$.
Now for your last part I suggest you try the matrix that is associated to $(123)(45)$ under the above group morphism.<|endoftext|>
TITLE: If $x,y,z\gt 0$ and $xyz=1$ Then minimum value of $\frac{x^2}{y+z}+\frac{y^2}{z+x}+\frac{z^2}{x+y}$
QUESTION [5 upvotes]: If $x,y,z\gt 0$ and $xyz=1$ Then find the minimum value of $\displaystyle \frac{x^2}{y+z}+\frac{y^2}{z+x}+\frac{z^2}{x+y}$
$\bf{My\; Try::}$Using Titu's Lemma $$\frac{x^2}{y+z}+\frac{y^2}{z+x}+\frac{z^2}{x+y}\ge \frac{(x+y+z)^2}{2(x+y+z)} = \frac{x+y+z}{2}\ge 3\frac{\sqrt[3]{xyz}}{2} = \frac{3}{2}$$
and equality holds when $$x=y=z=1$$
My question is: how can we solve it without the above lemma, e.g. using Jensen's inequality or some other inequality?
Please explain it to me.
Thanks
REPLY [3 votes]: This answer shows how Nesbitt's inequality can be used with other proof techniques to prove the given inequality.
Nesbitt's inequality
$$\frac{x}{y+z}+\frac{y}{z+x}+\frac{z}{x+y} \geq \frac{3}{2}\tag{$*$}$$
$(1a)$ Observe that
$$\sum_{cyc}\left(\frac{x^2}{y+z}\right)+(x+y+z)=(x+y+z)\left(\frac{x}{y+z}+\frac{y}{z+x}+\frac{z}{x+y}\right)\tag{A}$$
Using $(*)$ and (A), we get
$$\sum_{cyc}\left(\frac{x^2}{y+z}\right) \geq \frac{(x+y+z)}{2}\tag{B}$$
Now, using AM-GM inequality, we get
$$\frac{(x+y+z)}{2} \geq \frac{3\sqrt[3]{xyz}}{2}=\frac{3}{2}\tag{C}$$
Using (B) and (C), we get the desired result.
$(1b)$ Observe that
$$\sum_{cyc}\left(\frac{x^2}{y+z}\right)=(x+y+z)\left[\left(\sum_{cyc}\frac{x}{y+z}\right)-1\right]\tag{D}$$
Using $(*)$ and (C) in (D), we get
$$\sum_{cyc}\left(\frac{x^2}{y+z}\right) \geq 3 \cdot \left(\frac{3}{2}-1\right)=\frac{3}{2}$$
$(2)$
Without loss of generality, let $x \geq y \geq z$
Therefore, $$\frac{x}{y+z} \geq \frac{y}{x+z} \geq \frac{z}{x+y}$$
Now, using the Chebyshev sum inequality for these two similarly sorted sequences, we get
$$3 \sum_{cyc}\left(\frac{x^2}{y+z}\right) \geq (x+y+z)\sum_{cyc}\left(\frac{x}{y+z}\right)\tag{E}$$
Using $(*)$ and (C) in (E), we get the required result.
$(3)$ Let $f(x)=x^2$
Now, we can write
$$\sum_{cyc}\left(\frac{x^2}{y+z}\right)=\sum_{cyc}\left(\frac{f(x)}{y+z}\right)$$
Note, that $f''(x)=2 > 0$, so the function is convex.
Now, using the Weighted-Jensen inequality, we get
$$\sum_{cyc}\left(\frac{f(x)}{y+z}\right) \geq 3f(M)\tag{F}$$
where using $(*)$, we get
$$3M=\sum_{cyc}\left(\frac{x}{y+z}\right) \geq \frac{3}{2} \Longleftrightarrow M \geq \frac{1}{2}\tag{G}$$
Using (G) in (F), we get the desired result.
Note: Notice that this method is independent of the constraint $xyz=1$
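As an overall numerical sanity check of the claimed minimum (a rough random search, my own addition):
import numpy as np

def f(x, y, z):
    return x**2/(y + z) + y**2/(z + x) + z**2/(x + y)

rng = np.random.default_rng(0)
xy = np.exp(rng.uniform(-2, 2, size=(100_000, 2)))   # random positive x, y
x, y = xy[:, 0], xy[:, 1]
z = 1.0/(x*y)                                        # enforce xyz = 1
print(f(x, y, z).min(), f(1.0, 1.0, 1.0))            # sampled minimum stays >= 1.5; value 1.5 at (1,1,1)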
Finally, note that equality is indeed attained when $\boxed{x=y=z=1}$<|endoftext|>
TITLE: Groups of order $25$
QUESTION [6 upvotes]: Please verify my solution that there are only two groups of order $25$ up to isomorphism.
As $|G|$ is the square of a prime, $G$ is abelian.
By the Fundamental Theorem of Finite Abelian Groups, $G$ is a direct product of cyclic groups. The only possibilities here, since $25=5\cdot 5$, are $G = \mathbb{Z_{25}}$ or $G = \mathbb{Z_5} \times \mathbb{Z_5}$. Note that there is no element of order $25$ in the latter, so they're not isomorphic.
REPLY [4 votes]: Your solution is correct; here is a solution without using the structure theorem.
Proof: If $G$ is cyclic then $G \cong \frac {\mathbb Z}{25 \mathbb Z}$. So assume that $G$ is not cyclic; $G$ is abelian (as you mentioned in your solution). Now note that we have a group action $\frac {\mathbb Z}{5 \mathbb Z} \times G \to G$ defined by $(a,g) \to g^a$ (prove yourself that it is actually a well-defined action!). Hence $G$ is a vector space of dimension $2$ over $\frac {\mathbb Z}{5 \mathbb Z}$, and is therefore isomorphic to $\frac {\mathbb Z}{5 \mathbb Z} \times \frac {\mathbb Z}{5 \mathbb Z}$. QED
TITLE: Monotone Class Theorem and another similar theorem.
QUESTION [8 upvotes]: I found different statements of the Monotone Class Theorem.
On probability Essentials (Jean Jacod and Philip Protter) the Monotone Class Theorem (Theorem 6.2, page 36) is stated as follows:
Let $\mathcal{C}$ be a class of subsets of $\Omega$ closed under finite intersections and containing $\Omega$. Let $\mathcal{B}$ be the smallest class containing $\mathcal{C}$ which is closed under increasing limits and by difference. Then $\mathcal{B} = \sigma ( \mathcal{C})$.
While on Wikipedia (https://en.wikipedia.org/wiki/Monotone_class_theorem) the theorem is:
Let $G$ be an algebra of sets and define $M(G)$ to be the smallest monotone class containing $G$. Then $M(G)$ is precisely the $\sigma$-algebra generated by $G$, i.e. $\sigma(G) = M(G)$.
Where a monotone class in a set $R$ is a collection $M$ of subsets of $R$ which contains $R$ and is closed under countable monotone unions and intersections.
It looks like the second theorem should be a special case of the first. Does the first prove the second? Is it possible to prove the first from the second? Is there a decent literature on those two theorems?
REPLY [4 votes]: Both results are actually equivalent. You can prove one from the other.
Regarding the first result:
Let $\mathcal{C}$ be a class of subsets of $\Omega$ closed under finite intersections and containing $\Omega$. Let $\mathcal{B}$ be the smallest class containing $\mathcal{C}$ which is closed under increasing limits and by difference. Then $\mathcal{B} = \sigma ( \mathcal{C})$.
Some books call it "Monotone Class Theorem", although this is not the most usual naming.
A class having $\Omega$, closed under increasing limits and by difference is called a "Dynkin $\lambda$ system". A non-empty class closed under finite intersections is called a "Dynkin $\pi$ system".
The result above can be divided in two results
1.a. A $\lambda$ system which is also a $\pi$ system is a $\sigma$-algebra.
1.b. Given a $\pi$ system, the smallest $\lambda$ system containing it is also a $\pi$ system.
Some books call result 1.a (or result 1.b) "Dynkin $\pi$-$\lambda$ Theorem.
A quick reference is
https://en.wikipedia.org/wiki/Dynkin_system
The second result
Let $G$ be an algebra of sets and define $M(G)$ to be the smallest monotone class containing $G$. Then $M(G)$ is precisely the $\sigma$-algebra generated by $G$, i.e. $\sigma(G) = M(G)$.
Where a monotone class in a set $R$ is a collection $M$ of subsets of $R$ which contains $R$ and is closed under countable monotone unions and intersections.
is usually called "Monotone Class Lemma" (or theorem) you can find it in books like Folland's Real Analysis or Halmos' Measure Theory. In fact, Halmos presents a version of this result for $\sigma$-rings.
Let $G$ be ring of sets and define $M(G)$ to be the smallest monotone class containing $G$. Then $M(G)$ is precisely the $\sigma$-ring generated by $G$.
Let us prove that the results are equivalent
Result 1: Let $\mathcal{C}$ be a class of subsets of $\Omega$ closed under finite intersections and containing $\Omega$. Let $L(\mathcal{C})$ be the smallest class containing $\mathcal{C}$ which is closed under increasing limits and by difference. Then $L(\mathcal{C}) = \sigma ( \mathcal{C})$.
Result 2: Let $G$ be an algebra of sets and define $M(G)$ to be the smallest monotone class containing $G$. Then $M(G)$ is precisely the $\sigma$-algebra generated by $G$, i.e. $\sigma(G) = M(G)$.
Where a monotone class in a set $R$ is a collection $M$ of subsets of $R$ which contains $R$ and is closed under countable monotone unions and intersections.
Proof:
(2 $\Rightarrow$ 1). Note that any class containing $\mathcal{C}$ which is closed under increasing limits and by difference is closed under complements, because $\Omega \in \mathcal{C}$, and so it is also closed under decreasing limits. So it is closed under countable monotone unions and intersections. In other words: any class containing $\mathcal{C}$ which is closed under increasing limits and by difference is a monotone class.
Note also that any class containing $\mathcal{C}$ which is closed under increasing limits and by difference contains $A(\mathcal{C})$ the algebra generated by $\mathcal{C}$.
Then using Result 2 we have
$$ \sigma(\mathcal{C}) = \sigma(A(\mathcal{C})) = M(A(\mathcal{C})) \subseteq L(A(\mathcal{C}))=L(\mathcal{C}) $$
Since $\sigma(\mathcal{C})$ is a class containing $\mathcal{C}$ which is closed under increasing limits and by difference, we have $L(\mathcal{C}) \subseteq \sigma(\mathcal{C})$, so $L(\mathcal{C}) = \sigma(\mathcal{C})$.
(1 $\Rightarrow$ 2). First let us prove that $M(G)$ is a class containing $G$ which is closed under increasing limits and by difference. Since $M(G)$ is monotone, we have that $M(G)$ is closed under increasing limits.
Now, for each $E\in M(G)$, define
$$M_E=\{ F \in M(G) : E\setminus F , F \setminus E \in M(G) \}$$
Since $M(G)$ is a monotone class, $M_E$ is a monotone class. Moreover, if $E\in G$ then for all $F \in G$, $F\in M_E$, because $G$ is an algebra. So, if $E\in G$, $G \subset M_E$. So, if $E\in G$, $M(G) \subset M_E$. It means that for all $E\in G$, and all $F \in M(G)$, $F \in M_E$. So, for all $E\in G$, and all $F \in M(G)$, $E \in M_F$. So, for all $F \in M(G)$, $G \subset M_F$, but since
$M_F$ is a monotone class, we have, for all $F \in M(G)$, $M(G)\subset M_F$. But that means that $M(G)$ is closed by differences.
So we proved that $M(G)$ is a class containing $G$ which is closed under increasing limits and by difference.
So by Result 1, $$\sigma(G)=L(G) \subseteq M(G)$$
Since $\sigma(G)$ is a monotone class, we have
$$ M(G) \subseteq \sigma(G)$$
So we have $$\sigma(G)= M(G)$$<|endoftext|>
TITLE: Show that $U(8)$ is Isomorphic to $U(12)$.
QUESTION [6 upvotes]: Question: Show that $U(8)$ is Isomorphic to $U(12)$
The groups are:
$U\left ( 8 \right )=\left \{ 1,3,5,7 \right \}$
$U\left ( 12 \right )=\left \{ 1,5,7,11 \right \}$
I think there is a subtle point about isomorphisms that I am not fully understanding, which is hindering my progress. The solution mentions the order of an element, but I do not understand how that is pivotal to solving this.
Thanks in advance.
REPLY [2 votes]: The preferred approach would be to explicitly construct that isomorphism, but since the problem suggests basing the proof around the concept of "order of an element", let us do so then.
The crucial thing here is that both groups are of order $4$, and a simple theorem says that there exist only two classes of groups of order $4$: those isomorphic to $\Bbb Z_2 \times \Bbb Z_2$ (also known as "the Klein group") and those isomorphic to $\Bbb Z_4$. Notice that in $\Bbb Z_2 \times \Bbb Z_2$ all the elements are of order $2$, while in $\Bbb Z_4$ there are elements of order $4$.
It is now a matter of simple calculations to check that in your groups all the elements have order $2$, therefore they cannot be isomorphic to $\Bbb Z_4$, hence by the above considerations they must be isomorphic to $\Bbb Z_2 \times \Bbb Z_2$, hence isomorphic between themselves. (Notice that we don't have an explicit formula for this isomorphism - but neither are we required to find it.)<|endoftext|>
TITLE: Cartesian closed subcategories of compact Hausdorff topological spaces?
QUESTION [5 upvotes]: The category of compact Hausdorff topological spaces is famously not cartesian closed. I was wondering how much more one has to assume to actually arrive at a cartesian closed category.
For example, if we further assume total disconnectedness, i.e. we end up with the category of Stone spaces, do we get cartesian closedness or still not?
REPLY [2 votes]: Stone spaces are not cartesian closed, and I don't know of any interesting subcategory of compact Hausdorff spaces that is. Typically, you get an interesting cartesian closed category related to compact Hausdorff spaces by allowing more spaces, not less: the problem is not that your spaces are too general, but simply that the natural topology on the set of all continuous maps between two spaces is almost never compact, so to have a mapping object you need to allow some non-compact spaces.
Here's a proof that Stone spaces aren't cartesian closed. By Stone duality, if they were, then Boolean algebras would be cocartesian coclosed. In particular, coproducts (aka tensor products) of Boolean algebras would distribute over arbitrary products. This is false: if $B$ is an infinite Boolean algebra and $(A_i)$ is an infinite family of nontrivial Boolean algebras, the canonical map $B\otimes \prod A_i\to \prod B\otimes A_i$ is not surjective. (You can prove this, for instance, by noting that any element of $B\otimes \prod A_i$ comes from $B_0\otimes \prod A_i$ for some finite subalgebra $B_0\subset B$.)<|endoftext|>
TITLE: Additive rotation matrices
QUESTION [8 upvotes]: Let's assume that we want to find a rotation matrix which, when added to a given rotation matrix, again gives a rotation matrix. I would call such a matrix an additive rotation matrix for the given rotation matrix.
First consider the 2D case for the identity matrix. It is relatively easy to find such a matrix.
$
R= \begin{bmatrix}
-\dfrac{1}{2} & -\dfrac {\sqrt{3}}{2} \\
\dfrac{\sqrt{3}}{2} & -\dfrac{1}{2} \\
\end{bmatrix}
$
Indeed, we have
$
\begin{bmatrix}
-\dfrac{1}{2} & -\dfrac { \sqrt{3}}{2} \\
\dfrac{ \sqrt{3}}{2} & -\dfrac{1}{2} \\
\end{bmatrix} + \begin{bmatrix}
1 & 0 \\
0 & 1 \\
\end{bmatrix} = \begin{bmatrix}
\dfrac{1}{2} & -\dfrac { \sqrt{3}}{2} \\
\dfrac{\sqrt{3}}{2} & \dfrac{1}{2} \\
\end{bmatrix}
$
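(A quick numerical check of this example, added for illustration: both $R$ and $R+I$ are orthogonal with determinant $1$, hence rotations.)
import numpy as np

R = np.array([[-0.5, -np.sqrt(3)/2],
              [ np.sqrt(3)/2, -0.5]])
for M in (R, R + np.eye(2)):
    # orthogonality and determinant 1 characterize rotation matrices
    print(np.allclose(M.T @ M, np.eye(2)), np.isclose(np.linalg.det(M), 1.0))
# both lines print: True True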
The matrix symmetric to $R$ (its transpose) is also additive for the identity matrix, so we have at least 2 such matrices.
If such a matrix exists for the identity matrix, it should, I believe, exist for other 2D rotation matrices as well.
I also searched for such matrices in 3D, however without success.
Question
Do such matrices exist in 3D ?
If so how to find them.
If not how to prove it.
REPLY [19 votes]: There are no such 3D rotations.
Assume contrariwise that for certain rotations $R_1,R_2,R_3$ the equation
$$
R_1\vec{x}+R_2\vec{x}=R_3\vec{x}\qquad(*)
$$
holds for all $\vec{x}\in\Bbb{R}^3$. If this works for the triple $(R_1,R_2,R_3)$ then multiplying $(*)$ from the left by $R_3^{-1}$ we see that it also works for the triple $(R_3^{-1}R_1,R_3^{-1}R_2,I_3)$. So without loss of generality we can assume that $R_3$ is the identity mapping.
But $R_1$ has an axis (or $\lambda=1$ is one of its eigenvalues), so there exists a non-zero vector $\vec{u}$ such that $R_1\vec{u}=\vec{u}$. Plugging in $\vec{x}=\vec{u}$ shows that $R_2\vec{u}=\vec{0}$. This is impossible, because
as a rotation $R_2$ is non-singular.<|endoftext|>
TITLE: How to derive this Hankel's Contour integral formula with gamma function?
QUESTION [7 upvotes]: This relation was put up in The Art Of Computer Programming and no derivation was offered. Please help me understand this better.
$$\frac{1}{\Gamma (z)} = \frac{1}{2i\pi} \oint\frac{e^t dt}{t^z}$$
It said that the path of the complex integration starts at $-\infty$ circles around the origin and returns to $-\infty$.
If not a derivation, at least help me develop some intuition about this. How are we introducing complex analysis for a function that came up over the real numbers?
REPLY [13 votes]: Ok, I don't want to be a jerk, so here we go. Let's define the complex-valued function
$$
f(z)=\frac{e^{z}}{z^x}
$$
We integrate $f(z)$ over the so called Hankel contour $\mathcal{H}$ which consists of three parts ($\delta \rightarrow 0_+$):
-the line segment $[-\infty+i\delta,i\delta]$, denoted by $l_+$
-a small circular arc around the origin with radius $\delta$ joining the two line segments, denoted by $sc$
-the line segment $[-\infty-i\delta,-i\delta]$, denoted by $l_-$
Note that this contour fixes the branch cut of log to lie on the negative real axis.
It is not difficult to show that $\int_{sc}f(z)dz$ yields a vanishing contribution as $\delta\to 0_+$ as long as $x<1$, so we end up, using Cauchy's integral theorem, with
$$
\int_{\mathcal{H}} f(z)dz=-\int_{l+}f(z)dz-\int_{l-}f(z)dz
$$
now noting that $\lim_{\delta\rightarrow 0}f(t\pm i\delta)=e^{\mp i \pi x }\frac{e^{t}}{|t|^x}$ we obtain
$$
\int_{\mathcal{H}} f(z)dz =2i\sin(\pi x)\int_{-\infty}^0\frac{e^t}{|t|^x}dt=2i\sin(\pi x)\int_0^{\infty}\frac{e^{-t}}{t^x}dt
$$
the last integral is the standard integral representation of the gamma function
$$
\int_{\mathcal{H}} f(z)dz=2i \sin(\pi x)\Gamma(1-x)
$$
thanks to the well-known reflection formula $\Gamma(x)\Gamma(1-x)=\pi/\sin(\pi x)$, this equals
$$
\int_{\mathcal{H}} f(z)dz=\frac{2i\pi}{\Gamma(x)}
$$
QED<|endoftext|>
TITLE: Is a surjective mapping of R2 to itself with full rank derivative everywhere necessarily injective?
QUESTION [6 upvotes]: If $f:\mathbb R^2\rightarrow\mathbb R^2$ has rank 2 derivative everywhere, then by the inverse function theorem it is locally injective. If it is surjective, is it then necessarily globally injective as well?
What if we consider the same case, but for a map of $\mathbb R^{2+}$ to itself (i.e. ($x\geq 0,y\geq 0$)?
REPLY [7 votes]: No, $f$ needn't be globally injective. A counterexample is $$f:\mathbb C\to \mathbb C:z\mapsto \int_0^ze^{t^2}dt$$
Why is that entire map $f$ surjective?
Because by Picard's theorem it could at most skip one value $b\in \mathbb C$ i.e. $f(\mathbb C)=\mathbb C\setminus \{b\}$.
Of course that potential $b$ is nonzero since $f(0)=0$.
But since $f(-z)=-f(z)$, if it skipped $b$ it would also skip $-b$ so that actually $b$ does not exist and $f$ is indeed surjective.
Why is that entire map $f$ not injective ?
Because it would then be bijective and the only holomorphic bijections $u:\mathbb C\to \mathbb C$ are of the form $u(z)=az+b$ and have constant derivative $u'(z)=a$.
However $f'(z)=e^{z^2}$ is not constant.
Why does $f$ have maximal real rank everywhere?
Because $f'(z)=e^{z^2}$ never vanishes.
Nota Bene
I have used that $\mathbb C$ is just $\mathbb R^2$, endowed with its well-known supplementary complex structure.<|endoftext|>
TITLE: Convexity of difference of log-sum-exp: $f(x_1, x_2, x_3, x_4) = \log(e^{x_1} + e^{x_2}) - \log(e^{x_1} + e^{x_2} + e^{x_3} + e^{x_4})$
QUESTION [16 upvotes]: I would like to know whether the following function $f: \mathbf{R}^4 \to \mathbf{R}$ is concave or not:
$$
f(x_1, x_2, x_3, x_4) = \log(e^{x_1} + e^{x_2}) - \log(e^{x_1} + e^{x_2} + e^{x_3} + e^{x_4})
$$
I tried to check whether the Hessian was negative semi-definite, but did not get anywhere. The Hessian can be written as
$$
\nabla^2 f(x) =
\frac{1}{\tilde{Z}^2}(\tilde{Z} \cdot \text{diag}(\tilde{\mathbf{z}}) - \tilde{\mathbf{z}}\tilde{\mathbf{z}}^\intercal)
-
\frac{1}{Z^2}(Z \cdot \text{diag}(\mathbf{z}) - \mathbf{z} \mathbf{z}^\intercal),
$$
where
$$
\tilde{\mathbf{z}} = \begin{bmatrix} e^{x_1} & e^{x_2} & 0 & 0 \end{bmatrix}^\intercal \\
\mathbf{z} = \begin{bmatrix} e^{x_1} & e^{x_2} & e^{x_3} & e^{x_4} \end{bmatrix}^\intercal \\
\tilde{Z} = e^{x_1} + e^{x_2} \\
Z = e^{x_1} + e^{x_2} + e^{x_3} + e^{x_4}
$$
but I did not get much further than that. Any help would be greatly appreciated!
This question discusses in detail the convexity of the log-sum-exp function, but does not apply to my case (a difference of such functions).
REPLY [7 votes]: (This answer builds up on the first comment to the question above).
Short answer: no, the function is not concave.
Instead of analyzing the full $4 \times 4$ hessian, we can start by restricting our attention to a subspace of the input, e.g., the line induced by setting $x_2 = x_3 = x_4 = 0$. Along this line, the original function can be rewritten as a univariate function $\tilde{f}(x) = \log(e^x + 1) - \log(e^x + 3)$.
The second derivative of $\tilde{f}$ is
$$
\frac{d^2\tilde{f}}{dx^2} = \frac{e^x(6 - 2 e^{2x})}{(e^x + 1)^2 (e^x + 3)^2}
$$
It is easy to see that the second derivative is positive for $x = 0$, hence $\tilde{f}$ is not concave (furthermore, it is negative for $x = 1$, so it is not convex either).
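(A quick symbolic check of these sign claims with sympy, added for illustration:)
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.log(sp.exp(x) + 1) - sp.log(sp.exp(x) + 3)
f2 = sp.diff(f, x, 2)
print(sp.simplify(f2))      # a closed form of the second derivative
print(f2.subs(x, 0))        # 1/16 > 0 : not concave at 0
print(sp.N(f2.subs(x, 1)))  # about -0.053 < 0 : not convex at 1 either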
We conclude that $f$ is not concave.
Note that the type of functions described here (difference of log-sum-exps) appears in the log-likelihood function of certain statistical models of paired comparisons, such as Elimination by Aspects and team comparisons.<|endoftext|>
TITLE: Natural isomorphism $\tilde H_i(X) \xrightarrow{\cong} \tilde H_{i+1}(\Sigma X)$ where $\Sigma X$ is the suspension of $X$.
QUESTION [8 upvotes]: Define $\Sigma X$ to be the quotient space of $[-1,1]\times X$ obtained by collapsing $\{-1\}\times X$ and $\{1\}\times X$ to two points respectively. For any homology theory (satisfying the Eilenberg-Steenrod axioms), I am able to find an isomorphism $\tilde H_i(X) \rightarrow \tilde H_{i+1}(\Sigma X)$ as follows:
Denote $C_+ X=([0,1]\times X)/\sim$ and $C_- X=([-1,0]\times X)/\sim$; then we know both of them are contractible and $\Sigma X = C_+X \cup C_- X$. First we consider the reduced homology sequence for the pair $(\Sigma X, C_+ X)$:
$$
0=\tilde H_i(C_+ X) \to \tilde H_i(\Sigma X) \to H_i(\Sigma X,C_+ X) \to \tilde H_{i-1}(C_+ X)=0
$$
Hence we know $$\tilde H_i(\Sigma X) \to^{\cong} H_i(\Sigma X,C_+ X)$$ is an isomorphism. Second, by considering the reduced homology sequence for the pair $(C_- X, X)$(where X is identified with the quotient image of $\{0\}\times X$), we can similarly get $$
H_i(C_-X,X)\to^{\cong} \tilde H_{i-1}(X)
$$
Finally, using excision axiom and homotopy axiom we can show that
$$
H_i(C_-X,X) \cong H_i(\Sigma X,C_+ X)
$$
Nevertheless, I have no idea how to show this isomorphism is also
"natural". Here "natural" means that, if we denote this isomorphism
by $\Phi: \tilde H_i(X) \to \tilde H_{i+1}(\Sigma X)$, then, for any
map $f:X\to Y$ and its suspension $\Sigma f: \Sigma X \to \Sigma Y$,
$\Phi f_* = (\Sigma f)_* \Phi$.
REPLY [6 votes]: All the constructions that you used to define the isomorphism are natural/functorial:
Given a map $X \to Y$, you have a natural map that respect inclusions, which gives a starting point for all the applications of naturality to come:
$$(\Sigma X, C_+ X, C_- X, X \times \{0\}) \to (\Sigma Y, C_+ Y, C_- Y, Y \times \{0\});$$
The long exact sequence of a pair is natural, hence by using the natural map $(\Sigma X, C_+ X) \to (\Sigma Y, C_+ Y)$, the isomorphism $\tilde{H}_i(\Sigma X) \to \tilde{H}_i(\Sigma X, C_+ X)$ is natural in $X$;
Excision is natural, hence the excision isomorphism $\tilde{H}_i(C_- X, X) \to \tilde{H}_i(\Sigma X, C_+ X)$ is natural in $X$;
Finally the long exact sequence is still natural, hence the isomorphism $\tilde{H}_i(C_- X, X) \to \tilde{H}_{i-1}(X)$ is natural in $X$.
In conclusion, each subsquare in the following diagram is commutative, hence the "outer" rectangle (by inverting the horizontal arrows that go the wrong way, the composite of the whole thing is the suspension isomorphism) is commutative:
$$\require{AMScd}
\begin{CD}
\tilde{H}_i(\Sigma X) @>{\cong}>> \tilde{H}_i(\Sigma X, C_+ X) @<{\cong}<< \tilde{H}_i(C_- X, X) @>{\cong}>> \tilde{H}_{i-1}(X) \\
@VVV @VVV @VVV @VVV \\
\tilde{H}_i(\Sigma Y) @>{\cong}>> \tilde{H}_i(\Sigma Y, C_+ Y) @<{\cong}<< \tilde{H}_i(C_- Y, Y) @>{\cong}>> \tilde{H}_{i-1}(Y)
\end{CD}$$
tl;dr The composition of two functors is a functor.<|endoftext|>
TITLE: On the conjectured nonexistence of even almost perfect numbers (other than powers of two) and odd perfect numbers
QUESTION [5 upvotes]: (Note: This question has been cross-posted from MO.)
Let $\sigma(a) = \sigma_{1}(a)$ be the sum of the divisors of the positive integer $a$.
A number $M$ is called almost perfect if $\sigma(M) = 2M - 1$. $N$ is called perfect if $\sigma(N) = 2N$.
$M_s = 2^s$ where $s \geq 0$ are almost perfect, with $M_0 = 1$ being the only odd almost perfect number that is currently known. If $M \neq 2^t$ is an even almost perfect number, then Antalan and Tagle showed that $M$ must have the form
$$M = {b^2}{2^r}$$
where $r \geq 1$ and $b$ is an odd composite.
On the other hand, only $49$ even perfect numbers have been discovered, and they are of the form
$$N_e = 2^{p-1}\left(2^p - 1\right)$$
where $2^p - 1$ and $p$ are primes. It is currently unknown whether there are any odd perfect numbers, but Euler showed that they are of the form
$$N_o = n^2 q^c$$
where $q$ is prime with $q \equiv c \equiv 1 \pmod 4$ and $\gcd(q,n) = 1$.
Notice the following:
For numbers that we know exist
Even Perfect Numbers
Assuming $N_e \neq 6$ (because it is squarefree),
$$\dfrac{\sigma(2^{(p-1)/2})}{2^p - 1} = \dfrac{2^{(p+1)/2} - 1}{2^p - 1}< 1 < 4 \le 2^{(p+1)/2} = \dfrac{\sigma(2^p - 1)}{2^{(p-1)/2}}.$$
Note that
$$\dfrac{\sigma(2^p - 1)}{2^p - 1} \leq \dfrac{8}{7} < \sqrt{\dfrac{7}{4}} < \dfrac{\sigma(2^{(p-1)/2})}{2^{(p-1)/2}}.$$
Almost Perfect Numbers (Powers of Two)
Since $s \geq 0$,
$$\dfrac{\sigma(\sqrt{1})}{2^s} \leq 1 \leq 2^{s+1} - 1 = \dfrac{\sigma(2^s)}{\sqrt{1}}.$$
Note that
$$\dfrac{\sigma(\sqrt{1})}{\sqrt{1}} = 1 \leq \dfrac{\sigma(2^s)}{2^s}.$$
Spoof Odd Perfect Numbers
In Descartes' example $D = km$, we have
$$m = 22021 = {{19}^2}\cdot{61}$$
and
$$\sqrt{k} = {3}\cdot{7}\cdot{11}\cdot{13}.$$
Note that
$$\dfrac{\sigma(\sqrt{k})}{m} = \dfrac{5376}{22021} < 1 < \dfrac{22022}{3003} < \dfrac{m+1}{\sqrt{k}}$$
and
$$\dfrac{m+1}{m} = \dfrac{22022}{22021} < \dfrac{5376}{3003} = \dfrac{\sigma(\sqrt{k})}{\sqrt{k}}.$$
For numbers that are conjectured not to exist
Even Almost Perfect Numbers (Other Than Powers of Two)
$$\dfrac{\sigma(2^r)}{b} < 1 < 2 < \dfrac{\sigma(b)}{2^r}$$
Note that
$$\dfrac{\sigma(b)}{b} < \dfrac{4}{3} < \dfrac{3}{2} \leq \dfrac{\sigma(2^r)}{2^r}.$$
"Some New Results On Even Almost Perfect Numbers Which Are Not Powers Of Two" - Theorem 2.2, page 5
Odd Perfect Numbers
The following inequalities are conjectured in New Results for Sorli's Conjecture on Odd Perfect Numbers - Part II:
$$\dfrac{\sigma(q^c)}{n} < 1 < \sqrt{\dfrac{8}{5}} < \dfrac{\sigma(n)}{q^c}.$$
Note that
$$\dfrac{\sigma(q^c)}{q^c} < \dfrac{5}{4} < \sqrt{\dfrac{8}{5}} < \dfrac{\sigma(n)}{n}.$$
(Added June 27 2016 - In fact, in a recent preprint, Brown claims a partial proof for $q^c < n$, which would be consistent with the conjecture here.)
Here are my questions:
(1) If $K = {x^2}{y^z}$ is a (hypothetical) number satisfying $\sigma(K) = 2K + \alpha$ (with $y$ prime, $\gcd(x,y)=1$, and where $\alpha$ could be zero or negative), might there be a specific reason why the inequalities
$$\dfrac{\sigma(x)}{y^z} \leq 1 \leq \dfrac{\sigma(y^z)}{x}$$
seem to guarantee existence of such numbers $K$?
(2) If $L = {u^2}{v^w}$ is a (hypothetical) number satisfying $\sigma(L) = 2L + \beta$ (with $v$ prime, $\gcd(u,v)=1$ and where $\beta$ could be zero or negative), might there be a specific reason why the inequalities
$$\dfrac{\sigma(v^w)}{u} < 1 < \dfrac{\sigma(u)}{v^w}$$
seem to predict nonexistence of such numbers $L$?
Question (1) is illustrated (as detailed above) in the case of even perfect numbers, almost perfect numbers which are powers of two, and spoof odd perfect numbers (otherwise known in the literature as Descartes numbers).
Question (2) is illustrated (as detailed above) in the case of even almost perfect numbers which are not powers of two, and odd perfect numbers.
REPLY [2 votes]: (1)
The inequalities do not guarantee existence of such numbers $K$.
Take $(x,y,z)=(3,2,2)$ where $y$ is prime with $\gcd(x,y)=1$.
Then, we get
$$\frac{\sigma(x)}{y^z}=\frac{4}{4}\le 1\le \frac{7}{3}=\frac{\sigma(y^z)}{x}$$
and
$$\alpha=\sigma(K)-2K=\sigma(36)-2\times 36=91-72=19$$
which is positive.
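A short script makes the check in (1) explicit (a sketch, with a naive divisor-sum helper of my own):

```python
def sigma(n):
    # sum of the divisors of n (naive, fine for small n)
    return sum(d for d in range(1, n + 1) if n % d == 0)

x, y, z = 3, 2, 2
K = x**2 * y**z                            # K = 36
print(sigma(x) / y**z, sigma(y**z) / x)    # 1.0 and 2.33...: the inequalities hold
print(sigma(K) - 2 * K)                    # alpha = 19, which is positive
```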
(2)
The inequalities do not predict nonexistence of such numbers $L$.
Take $(u,v,w)=(5,2,1)$ where $v$ is prime with $\gcd(u,v)=1$.
Then, we get
$$\dfrac{\sigma(v^w)}{u}=\frac{3}{5} < 1 < \frac{6}{2}=\dfrac{\sigma(u)}{v^w}$$
and
$$\beta=\sigma(L)-2L=\sigma(50)-2\times 50=93-100=-7$$
which is negative.<|endoftext|>
TITLE: Is it true that $\left(\frac{a^2+b^2+c^2}{a+b+c}\right)^{a+b+c}≥a^ab^bc^c$?
QUESTION [5 upvotes]: Let $a,b,c\in\mathbb{R}_{>0}$. Is it true that:
$$
\left(\frac{a^2+b^2+c^2}{a+b+c}\right)^{a+b+c}≥a^ab^bc^c
$$
I remarked that the inequality is (a bit weirdly) homogeneous, but couldn't use it. Also, directly taking the logarithm doesn't seem to help; how can I decide whether it's true?
REPLY [4 votes]: Since $a,b,c>0$,
\begin{align}
\frac{a \log a + b \log b + c \log c}{a+b+c} \le \log \left(\sum_{cyc}a\times \frac{a}{a+ b+ c}\right)
\end{align}
by Jensen's inequality applied to the concave function $\log$ with weights $\frac{a}{a+b+c},\frac{b}{a+b+c},\frac{c}{a+b+c}$ (the left-hand side is the weighted average of $\log a,\log b,\log c$, and the right-hand side is $\log$ of the weighted average). Exponentiating both sides gives the required result.<|endoftext|>
TITLE: How to calculate the negative half power of a matrix
QUESTION [5 upvotes]: I have a square matrix called $A$. How can I find $A^{-1/2}$? Should I compute $a_{ij}^{-1/2}$ for each of its elements?
Thanks
REPLY [2 votes]: Firstly you will need to pick a branch of the square root function over the field of which the elements of your matrix belong. Once you have done that,
If $A$ is diagonalizable you can do as @Annalise writes.
If $A$ is orthogonally diagonalizable you can do as @Junning Li writes.
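For the orthogonally diagonalizable case, here is a minimal numerical sketch (assuming a symmetric positive-definite $A$, so that real inverse square roots of the eigenvalues exist); note that this is not the same as taking entrywise powers $a_{ij}^{-1/2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)              # a random symmetric positive-definite matrix

w, Q = np.linalg.eigh(A)                 # A = Q diag(w) Q^T
A_inv_sqrt = Q @ np.diag(w**-0.5) @ Q.T  # A^{-1/2} = Q diag(w^{-1/2}) Q^T

print(np.allclose(A_inv_sqrt @ A_inv_sqrt, np.linalg.inv(A)))   # True
print(np.allclose(A_inv_sqrt @ A @ A_inv_sqrt, np.eye(4)))      # True
```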
However, if $A$ is not diagonalizable, it may still be put into some canonical form:
$$A = TCT^{-1}$$
where $C$ is a block-diagonal matrix. In this case we can approximate a square root by applying a power series expansion to the diagonal blocks of $C$. However, this is not guaranteed to make sense; it will depend a lot on the application.<|endoftext|>
TITLE: What's the explanation for these (infinitely many?) Ramanujan-type identities?
QUESTION [14 upvotes]: Define the function,
$$F(\beta) := \sqrt[3]{\beta+x_1}+\sqrt[3]{\beta+x_2}+\sqrt[3]{\beta+x_3}\tag1$$
where,
$$x_1 =2\cos\big(\tfrac{2\pi }{7}\big),\;x_2 =2\cos\big(\tfrac{4\pi }{7}\big),\; x_3 = 2\cos\big(\tfrac{8\pi }{7}\big)$$
or the $x_i$ are the roots of the cubic $x^3+x^2-2x-1=0$. We have two nice identities,
$$F(0) = \sqrt[3]{5-3\sqrt[3]7}$$
$$F(1) = \sqrt[3]{-4+3\sqrt[3]7}$$
The first one is by Ramanujan. (See The Problems Submitted by Ramanujan to the Journal of the Indian Mathematical Society, p. 9, by Bruce Berndt, et al.)
These two (and other pairs) can be explained by the negative and positive cases of $\pm \sqrt{d}$ in this answer. However, davidoff303 found a plethora of others,
$$F\Big(\frac{74}{43}\Big) = 2\,\sqrt[3]\frac{7^2}{43}$$
$$F\Big(\frac{5105}{11349}\Big) = 3\,\sqrt[3]\frac{7}{11349}$$
$$F\Big(\frac{-2306997866696}{1047656140569}\Big) = -10980\,\sqrt[3]\frac{7^2}{1047656140569}$$
$$F\Big(\frac{9658771264742899051}{5361029308457632889}\Big)= 13\cdot 127\cdot 1381\sqrt[3]{\frac{7}{5361029308457632889}}$$
and so on. Note that, for general rational $\beta$, $\big(F(\beta)\big)^3$ is a root of a $9$th degree equation.
Q1: Are there infinitely many rational $\beta$ such that $\big(F(\beta)\big)^3$ is also rational, like the ones found by davidoff303? Is there a formula for $\beta$?
$\color{blue}{Update:}$
I formed the $9$th deg equation satisfied by $\big(F(\beta)\big)^3$. Acting on a hunch, it turns out that the $\beta$ used by davidoff303 satisfies the simple relation,
$$\beta^3-\beta^2-2\beta+1 =w^3\tag2$$
This can be transformed into an elliptic curve, and it has infinitely many rational points. And the $LHS$ of $(2)$ in fact is a factor of the discriminant of the $9$th deg eqn.
Q2: Is it true that if rational $\beta$ satisfies $(2)$, then $\big(F(\beta)\big)^3$ is also rational?
REPLY [2 votes]: This is a long comment/addendum to mercio’s answer. Given rational solutions to,
$$\beta^3-\beta^2-2\beta+1 = w^3\tag1$$
As pointed out by mercio, if we define,
$$X = 3\beta^2-2\beta-2+(3\beta-1)w+3w^2 \\Y = 9\beta^3-9\beta^2-11\beta+3w(3\beta^2-2\beta-2)+3w^2(3\beta-1)$$
then this obeys the elegant relation,
$$(Y+2)(Y+9) = X^3$$
However, if we require $\beta$ such that
$$Y+2 = u^3\tag2$$
$$Y+9 = v^3\tag3$$
holds separately, then we can define $\big(F(\beta)\big)^3$ as,
$$\big(F(\beta)\big)^3 = \frac{3(t+19)(u+v)+(3uv+7)(27u^4v-63u^3+105uv-196)}{t+19}\tag4$$
where,
$$t=27(u^3+3)(u^3+4)$$
Thus, if $\beta$ satisfies the rational Diophantine conditions $(2),(3)$ in addition to $(1)$, then $\big(F(\beta)\big)^3$ is rational.
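Before the worked example, a quick numerical sanity check of this claim for $\beta=\tfrac{74}{43}$ (a sketch, using real cube roots):

```python
import numpy as np

x = 2 * np.cos(2 * np.pi * np.array([1, 2, 4]) / 7)   # x1, x2, x3

def F(beta):
    return np.sum(np.cbrt(beta + x))                   # real cube roots

print(F(74/43)**3, 392/43)          # both ~ 9.1163, i.e. 2^3 * 7^2 / 43
print(F(0)**3, 5 - 3 * 7**(1/3))    # Ramanujan's identity F(0)^3 = 5 - 3*7^(1/3)
```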
Example. Let $\beta = \tfrac{74}{43}$, we get $u=1,\;v=2,\;t=540,$ and using formula $(4)$,
$$\big(F(\tfrac{74}{43})\big)^3 = \frac{2^3\cdot7^2}{43}$$
the same as in the post above. All six examples in the post obey $(1)$, but only the last four obey all of $(1),(2),(3)$. Presumably there are infinitely many rational $\beta$ that obey all three conditions, but I do not know how to prove it.<|endoftext|>
TITLE: What is the regularity of the greatest eigenvalue of the Hessian matrix?
QUESTION [8 upvotes]: Let $f:\mathbb{R}^n\to\mathbb{R}$ be twice continuously differentiable, i.e. $f\in\mathcal{C}^2\left(\mathbb{R}^n\right)$. Define $\lambda_f:\mathbb{R}^n\to\mathbb{R}$ as the function which associates $\vec x$ with the greatest eigenvalue of the Hessian matrix of $f$ in $\vec x$. Then what can we say about the regularity of $\lambda_f$? Is it continuous/differentiable?
The question came to my mind at today's analysis exam. I have absolutely no idea how one may argue; the definition of $\lambda_f$ doesn't seem to be easily usable.
Edit:
Or even more generally: is the function $\Lambda_f:\mathbb{R}^n\to\mathbb{R}^n$ which assigns to $\vec x$ the ordered $n$-tuple of eigenvalues (with multiplicities) of the Hessian of $f$ at $\vec x$ (i.e. $\Lambda_f\left(\vec x\right)=\left(\lambda_1,...,\lambda_n\right)$ where $\lambda_1≥...≥\lambda_n$ are the eigenvalues of $H_f\left(\vec x\right)$) continuous/differentiable? If so, can we calculate the Jacobian matrix?
REPLY [4 votes]: @Bill Cook gave a good answer (as usual). Yet we can say more about the question under consideration.
Firstly, if you consider a $C^2$ function, then its Hessian is continuous and you cannot expect better than continuity for its eigen-elements. On the other hand, considering the spectrum of a symmetric matrix $A$ as an ordered $n$-tuple ($\lambda_1\geq \cdots \geq \lambda_n$) is (in general) a bad idea: you cannot (in general) construct a differentiable parametrization of the spectrum this way; you obtain only a continuous parametrization (even if $f\in C^{\infty}$). In fact, one has a little more: the function $A\in S^n\rightarrow ordered\;spectrum(A)$ is locally Lipschitz continuous.
Let $S_n$ be the set of symmetric real matrices of dimension $n$. There exists a precise result when the symmetric matrix depends analytically on one parameter.
Proposition. Assume that $t\in\mathbb{R}\rightarrow M_t\in S_n$ is analytic. Then the eigenvalues and a basis of (unit length) eigenvectors of $M_t$ are globally analytically parametrizable (even if the eigenvalues present some multiplicities; moreover, as said above, the natural ordering of the eigenvalues is not necessarily respected).
Remark 1. That works also when $t\in\mathbb{R}\rightarrow M_t\in S_n$ is smooth; we must add the condition that two continuous curves $(t\rightarrow \lambda_i(t),t\rightarrow \lambda_j(t))$ (where $(\lambda_i(t),\lambda_j(t))$ are any couple of eigenvalues of $M_t$) are the same or intersect only a finite number of times.
According to Bill's counterexample, the above result is not valid when the entries of our symmetric matrix depend on more than one parameter (here $A(x,y)$); better, Bill shows that the largest eigenvalue of $Hess(f)(x,y)$ may fail to be differentiable in a neighborhood of a multiple eigenvalue.
Conclusion. Let $f\in C^{\infty}$. If the largest eigenvalue of $Hess(f)(x)$ is always simple, then your function $\lambda_f$ is $C^{\infty}$ and there is a $C^{\infty}$ parametrization of "the" associated eigenvector; otherwise, it is locally Lipschitz continuous but may fail to be $C^1$, and, moreover, "the" associated eigenvector may fail to be continuous.
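To illustrate the last point numerically, here is a sketch built on a two-parameter family of my own choosing (not necessarily Bill's example): for $A(x,y)=\begin{pmatrix}x&y\\y&-x\end{pmatrix}$ the largest eigenvalue is $\sqrt{x^2+y^2}$, which is Lipschitz but not differentiable at the origin, where the eigenvalue is multiple.

```python
import numpy as np

def lam_max(x, y):
    # largest eigenvalue of [[x, y], [y, -x]]; analytically sqrt(x^2 + y^2)
    return np.linalg.eigvalsh(np.array([[x, y], [y, -x]]))[-1]

for x, y in [(0.3, 0.4), (-0.3, 0.4), (0.0, -0.5)]:
    print(lam_max(x, y), np.hypot(x, y))    # the two values agree

# one-sided difference quotients at the origin along the x-axis are +1 and -1,
# so lam_max is Lipschitz but not differentiable there
h = 1e-6
print((lam_max(h, 0.0) - lam_max(0.0, 0.0)) / h,
      (lam_max(0.0, 0.0) - lam_max(-h, 0.0)) / h)
```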
Remark. Proposition and Remark 1 are also valid when $M_t$ is normal (cf. reference 2, Theorem (A); note that Theorem (B) gives (complicated) results when there are several parameters).
References. 1. http://www.mat.univie.ac.at/~michor/roots.pdf
https://arxiv.org/pdf/1111.4475v2.pdf
EDIT. About your last question, assume that the eigenvalues of $Hess(f)_x$ are simple (for every $x$) and $f\in C^{\infty}$. Then the Jacobian of the function $x\in \mathbb{R}^n\mapsto (\lambda_1,\dots,\lambda_n)\in\mathbb{R}^n$ (with $\lambda_1>\cdots>\lambda_n$) exists; more precisely, $\dfrac{\partial \lambda_i}{\partial x_j}$ can be written using "the" unit eigenvector $u_i$ associated to $\lambda_i$, and not its derivative.
Proof. Let $A_x=[a_{rs}]=Hess(f)_x$. It is known (Hadamard) that $\dfrac{\partial \lambda_i}{\partial a_{rs}}=u_i^T\dfrac{\partial A}{\partial a_{rs}}u_i=(u_i)_r(u_i)_s$.
Thus $\dfrac{\partial \lambda_i}{\partial x_j}=\sum_{rs}\dfrac{\partial \lambda_i}{\partial a_{rs}}\dfrac{\partial a_{rs}}{\partial x_j}=\sum_{rs}(u_i)_r(u_i)_s\dfrac{\partial^3 f}{\partial x_r\partial x_s\partial x_j}$.<|endoftext|>
TITLE: About Jean-Yves Girard
QUESTION [14 upvotes]: I am student and I'm studying linear logic. I saw a quote in a book:
"I'm not a linear logician" - Jean-Yves Girard. Tokyo, April 1996.
I searched on Google but did not find the context of why he said it. What did he mean by that phrase?
REPLY [19 votes]: As suggested by @rschwieb, I sent an email to Girard. Here is his answer:
Hi,
The general idea is that logic should not have an adjective: look
at modal, non-monotonic, paraconsistent logics in which the
adjective is a sort of negation, like in « military justice ».
Linear logic is just a way to do plain logic, i.e. to study pure reasoning.
My position was that linearity is a question of fine structure: even if you
are concerned with intuitionism, the use of linear implication can be very
helpful.
More recently I discovered that predicate calculus is based on a mistake,
namely the idea of « a property of individuals » P(t): we cannot speak of
« blue » but only say « the sky is blue ». This realistic contraption makes
equality a nonsense. It is indeed possible to expel the « individuals » and
replace them, when needed, with linear propositions T, U, V, in which case
equality becomes linear equivalence. For this we absolutely need linearity
since we can prove (classically or intuitionistically) that there are no
more than two « individuals ».
To sum up, I am not a linear logician because the natural treatment of logic
compels us into using linearity, this independently of any commitment.
Best regards,
J-YG<|endoftext|>
TITLE: Finding Delta Algebraically for a Given Epsilon?
QUESTION [10 upvotes]: For the limit $$\lim_{x\to 5}\sqrt{x-1}=2$$ find a $\delta>0$ that works for $\epsilon=1$.
In other words, find a $\delta>0$ such that for all $x$, $$0<|x-5|<\delta \implies |\sqrt{x-1}-2|<1$$
Ok so here's what I did...
$$|\sqrt{x-1}-2|<1$$
$$-1<\sqrt{x-1}-2<1$$
$$1<\sqrt{x-1}<3$$
$$1<x-1<9$$
$$2<x<10$$
So the implication holds if I take $\delta = 3$: then $0<|x-5|<3$ gives $2<x<8\subset(2,10)$, hence $|\sqrt{x-1}-2|<1$.<|endoftext|>
TITLE: Is every finite group a normal subgroup of a symmetric group?
QUESTION [11 upvotes]: By Cayley's theorem, we know that for any finite group $G$, there exists $N \in \mathbb{N}$ such that $G$ is isomorphic to a subgroup of $S_N$, the symmetric group on $N$ letters. Can we prove that for every finite group $G$ there is some symmetric group $S_N$ such that $G$ is isomorphic to a $normal$ subgroup of $S_N$?
REPLY [3 votes]: In general (i.e. for $N \neq 4$), the only normal subgroups of $S_N$ are $S_N$ itself, $A_N$, and $1.$ Therefore no, because most $G$ will not be isomorphic to one of these. ($S_4$ has an additional normal subgroup, the Klein $4$-group hiding inside it.)<|endoftext|>
TITLE: A deformation retract that is not a strong deformation retract
QUESTION [6 upvotes]: In Lee's Introduction to Topological Manifolds, problem 7-12 asks to show that $\{(1,0)\}$ is a deformation retract, but not a strong deformation retract of the subspace of the plane
$$ X = \bigcup_{n=0}^\infty L_n$$
where $L_0$ is the line segment connecting $(1,0)$ to the origin and $L_n$ is the line segment connecting the origin to $(1,1/n)$ for $n \geq 1$. The part I'm struggling with is to show that it is not a strong deformation retract. If it were, then there would be a continuous map $H: X \times [0,1] \rightarrow X$ with
$$H(x,0) = x, \quad \forall x \in X;\\
H(x,1) = (1,0), \quad \forall x \in X; \\
H((1,0),t) = (1,0) \quad \forall t \in [0,1]$$
I was hoping a contradiction would pop out somewhere, but no luck. Can someone give a hint or suggest a different approach?
REPLY [8 votes]: OUTLINE: Let $H:X\times[0,1]\to X$ with $H|_{X\times\{0\}}=\mathrm{id}_X$ and $H|_{X\times\{1\}}=c_{(1,0)}$ the constant map to $\{(1,0)\}$ be continuous. Consider the sequence of points $p_n=(1,\frac{1}{n})$. Now prove (or recall) the following facts:
There is a characterization of continuity from elementary analysis in terms of convergent sequences in the domain (it holds in general for Hausdorff spaces).
Every bounded infinite sequence of reals has a convergent subsequence.
The sequence $p_n\to(1,0)$ as $n\to\infty$.
For all $n\in\mathbb{Z}^+$, every path $\alpha_n$ from $p_n$ to $(1,0)$ in $X$ has a time $t_n\in[0,1]$ such that $\alpha_n(t_n)=(0,0)$.
The map $H|_{\{p_n\}\times[0,1]}$ determines a path from $p_n$ to $(1,0)$ for all $n\in\mathbb{Z}^+$.
Used together these facts will imply that there is a time $t_0\in[0,1]$ such that $H((1,0),t_0)=(0,0)$; in particular $H$ does not fix $(1,0)$ and is thus not a strong deformation retraction map onto $\{(1,0)\}$. As $H$ was arbitrary there can be no strong deformation retraction to $\{(1,0)\}$ as desired.<|endoftext|>
TITLE: What are some fields that intersect topology and number theory?
QUESTION [6 upvotes]: I see that number theory is studied from the algebraic and analytics aspects, but I have not seen any approach from topology or axiomatic set theory (using them to investigate the properties or numbers and open problems in number theory). What are some topics intersecting them?
REPLY [4 votes]: An arithmetic group is a group determined as the integer points of an algebraic group. One special type of hyperbolic manifolds that are at least somewhat better understood are the manifolds arising as $\Bbb H^3/G$ where $G$ is an arithmetic subgroup of $PSL(2,\Bbb C)$.
Two such manifolds are commensurable (have a common finite-sheeted cover) if and only if two invariants agree: a number field and a quaternion algebra associated to the manifold. (These invariants were generalized to all hyperbolic 3-manifolds in this very famous paper of Neumann and Reid http://www.math.columbia.edu/~neumann/preprints/nrarith.pdf)
For something slightly closer to my own interest, the study of symmetric bilinear forms over $\Bbb Z$ shows up consistently in 3 and 4-manifolds, but is really a part of number theory. For a 4-manifold $X$ the bilinear form (the "intersection form") $H^2(X) \times H^2(X) \to \Bbb Z$ given by the cup product and evaluating on the fundamental class is a very useful invariant.
Freedman proved that this invariant completely determines smooth simply-connected 4-manifolds up to homeomorphism. Donaldson famously showed that if such a form is positive definite (for a closed simply-connected 4-manifold), then the form is diagonalizable over $\Bbb Z$. The reproofs of Donaldson's result via the Seiberg-Witten equations and Heegaard Floer homology, by Kronheimer and Mrowka, and Ozsváth and Szabó respectively (https://arxiv.org/pdf/math/0110170v2.pdf for the HF one), in fact require a nontrivial number-theoretic result involving no topology, due to N. Elkies (https://arxiv.org/abs/math/9906019). It is still a very open question which of the forms can be realized as the intersection form of a closed orientable simply-connected 4-manifold.<|endoftext|>
TITLE: How to prove this statement? (Real analysis)
QUESTION [6 upvotes]: This might be the basic question in real analysis.
A function $f$ is a $C^2$ function on the closed interval $[0,1]$.
Also, the function $f$ satisfies $f(0) = f(1) = 0$.
In addition, $\vert f''(x) \vert \le A$ on the open interval $(0,1)$.
Show that $\vert f'(x) \vert \le \frac{A}{2}$ on the interval $(0,1]$.
I tried many times using Rolle's theorem, the mean value theorem, etc.,
but failed. Please give me some hints.
REPLY [6 votes]: Two Taylor expansions at $x\in(0,1)$ are
\begin{eqnarray}
0&=&f(0)=f(x)+f'(x)(0-x)+\frac{f''(a)}{2}x^2,\\
0&=&f(1)=f(x)+f'(x)(1-x)+\frac{f''(b)}{2}(1-x)^2.
\end{eqnarray}
Here $a,b\in(0,1)$, hence $|f''(a)|\le A$ and $|f''(b)|\le A$.
The second one minus the first one gives
$$
f'(x)=\frac{f''(a)}{2}x^2-\frac{f''(b)}{2}(1-x)^2.
$$
Now estimate
$$
|f'(x)|\le\frac{A}{2}(\underbrace{x^2+(1-x)^2}_{\le 1})\le\frac{A}{2}.
$$
The function $f'(x)$ is continuous, so the estimate can be extended to the closed interval.<|endoftext|>
TITLE: Quasi-newton methods: SR1 and BFGS inverse update
QUESTION [6 upvotes]: In Numerical Optimization by Nocedal and Wright, (http://home.agh.edu.pl/~pba/pdfdoc/Numerical_Optimization.pdf) Chapter 2 on unconstrained optimization, page 25 top, the authors claim that
"The equivalent formula for SR1, $$B_{k+1} = B_k + \frac{(y_k - B_k s_k) (y_k - B_k s_k)^T} {(y_k - B_k s_k)^T s_k}$$
and BFGS, $$B_{k+1} = B_k - \frac{B_ks_ks_k^T}{s_k^TB_ks_k} + \frac{y_ky_k^T}{y_k^Ts_k} $$
applied to the inverse approximation $$H_k = B_k^{-1} (definition) $$
is $$H_{k+1} = (I - \rho_k s_k y_k^T)H_k (I - \rho_k y_k s_k^T) + \rho_k s_k s_k^T$$ where $\rho_k = \frac{1}{y_k^T s_k}$"
I have expanded the inverse update given above to
$$H_{k+1} = H_k - \frac{H_ky_ks_k^T}{y_k^T s_k} - \frac{s_k y_k^T H_k}{y_k^T s_k} + \frac{s_k y_k^T H_k y_k s_k^T}{(y_k^T s_k)^2} + \frac{s_k s_k^T}{y_k^T s_k}$$
But from here, I cannot algebraically manipulate that into either of the non-inverse update formulas.
I also do not understand why, given that the SR1 and BFGS formulas are different updates with different guarantees, the authors claim that they have the same inverse update.
===========================
EDIT: I still can't get the BFGS inverse update formula.
Here's my work:
I'm trying to do the updates separately since I cannot find a way to make the two rank-1 matrices form the product $UV^T$.
Using the higher order Sherman-Morrison formula,
$$B_k' = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k}$$ choosing $$U=-\frac{1}{C} B_k s_k s_k^T$$ and $$V = B_k^T$$
gives
$${B'}_k^{-1} = \frac{1}{C} s_k s_k^T$$
$$B_{k+1} = B'_k + \frac{y_k y_k^T}{y_k^T s_k}$$ choosing $$U_2 = \frac{y_k}{K}$$ and $$V_2 = y_k$$
which gives
$$B_{k+1}^{-1} = -\frac{{B'}_k^{-1} y_k y_k^T {B'}_k^{-1}}{K} = -\frac{1}{K} (\frac{1}{C} s_k s_k^T) y_k y_k^T (\frac{1}{C} s_k s_k^T)$$ does not work out to give the update that I want.
REPLY [4 votes]: You are right, the inverse approximation (2.21) seems to be for BFGS only. Compare with (6.17), page 140. The inverse approximation for SR1 is given in (6.25), page 144. To obtain inverses one applies the Sherman–Morrison formulae (A.27) and (A.28), page 612.
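A quick numerical illustration of this point (a sketch): the inverse update (2.21) inverts the BFGS formula (6.19), but not the SR1 formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)      # a random SPD approximation B_k
H = np.linalg.inv(B)             # H_k = B_k^{-1}
s = rng.standard_normal(n)
y = rng.standard_normal(n)
if y @ s < 0:                    # enforce the curvature condition y^T s > 0
    y = -y
rho = 1.0 / (y @ s)

# BFGS update of B_k and the claimed inverse update
B_bfgs = B - np.outer(B @ s, B @ s) / (s @ B @ s) + np.outer(y, y) / (y @ s)
H_new = (np.eye(n) - rho * np.outer(s, y)) @ H @ (np.eye(n) - rho * np.outer(y, s)) \
        + rho * np.outer(s, s)

# SR1 update of B_k
v = y - B @ s
B_sr1 = B + np.outer(v, v) / (v @ s)

print(np.allclose(H_new, np.linalg.inv(B_bfgs)))   # True: (2.21) inverts BFGS
print(np.allclose(H_new, np.linalg.inv(B_sr1)))    # False: it is not the SR1 inverse
```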
Edit:
Starting with the RHS of (6.19) and omitting subscript $k$
$$
B-\frac{Bss^TB}{s^TBs}+\frac{yy^T}{y^Ts}=\underbrace{B}_{A}+
\underbrace{\begin{bmatrix}Bs & y\end{bmatrix}}_{U}
\underbrace{\begin{bmatrix}-\frac{1}{s^TBs} & 0\\0 & \frac{1}{y^Ts}\end{bmatrix}
\begin{bmatrix}s^TB\\ y^T\end{bmatrix}}_{V^T}.
$$
Then with $B^{-1}=H$ the inverse of the RHS is:
\begin{align*}
(A+UV^T)^{-1}&=H-H\begin{bmatrix}Bs & y\end{bmatrix}
\left(I+\begin{bmatrix}-\frac{1}{s^TBs} & 0\\0 & \frac{1}{y^Ts}\end{bmatrix}
\begin{bmatrix}s^TB\\ y^T\end{bmatrix}H\begin{bmatrix}Bs & y\end{bmatrix}\right)^{-1}\begin{bmatrix}-\frac{1}{s^TBs} & 0\\0 & \frac{1}{y^Ts}\end{bmatrix}\begin{bmatrix}s^TB\\ y^T\end{bmatrix}H\\
&=H-\begin{bmatrix}s & Hy\end{bmatrix}
\left(\begin{bmatrix}-s^TBs & 0\\0 & y^Ts\end{bmatrix}+
\begin{bmatrix}s^TB\\ y^T\end{bmatrix}H\begin{bmatrix}Bs & y\end{bmatrix}\right)^{-1}\begin{bmatrix}s^T \\ y^TH\end{bmatrix}\\
&=H+\frac{1}{y^Tss^Ty}\left(\begin{bmatrix}s & Hy\end{bmatrix}
\begin{bmatrix}y^Ts+y^THy & -s^Ty\\-y^Ts & 0 \end{bmatrix}\begin{bmatrix}s^T \\ y^TH\end{bmatrix}\right)\\
&=H+\frac{1}{y^Tss^Ty}\left(sy^Tss^T+sy^THys^T-Hyy^Tss^T-ss^Tyy^TH\right)\\
&=H+\left(1+\frac{y^THy}{y^Ts}\right)\frac{ss^T}{s^Ty}-\frac{Hys^T+(Hys^T)^T}{y^Ts}\\
\end{align*}<|endoftext|>
TITLE: Reference about the surgery of Ricci flow
QUESTION [7 upvotes]: I have roughly read Topping's Lectures on the Ricci Flow. There does not seem to be an introduction to surgery; seemingly, it is enough to deal with singularities by blowing up. Then, in order to learn about surgery, I read Perelman's Ricci flow with surgery on three-manifolds, but it is not easy to read. Are there any books that introduce surgery and are similar in spirit to Topping's book (which I find friendly to read)?
REPLY [2 votes]: A group of five mathematicians (Bessières, Besson, Boileau, Maillot and Porti) have written a book aiming to give a complete proof of the Geometrisation Conjecture from pre-Perelman's results.
As such, they present a version of Ricci flow with surgery. I hope the book will appear suitable to you (I haven't read it in detail, but the introduction and first chapters seem remarkably clear to me).<|endoftext|>
TITLE: Finding the sum of numbers between any two given numbers
QUESTION [5 upvotes]: I tried to derive this type of formula and ended up with this. But it does not hold true for all numbers. Can you please tell me what I've done wrong?
REPLY [10 votes]: Between $\alpha$ and $\beta$, there are $\beta - \alpha + 1$ numbers. We need
\begin{align*}
S &= \alpha + (\alpha + 1) + \cdots + \beta \\
&= \beta + (\beta - 1) +\cdots + \alpha
\end{align*}
Adding vertically, we have
\begin{equation*}
2S = (\beta-\alpha+1)(\alpha+\beta)
\end{equation*}
Hence
\begin{equation*}
S = \frac{(\beta-\alpha+1)(\alpha+\beta)}{2}
\end{equation*}
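(A quick check of the formula against a direct sum, as a sketch:)

```python
def arithmetic_sum(alpha, beta):
    # sum of the integers alpha, alpha + 1, ..., beta
    return (beta - alpha + 1) * (alpha + beta) // 2

print(arithmetic_sum(3, 10), sum(range(3, 11)))   # both 52
```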
This "reverse and add" technique is due to Gauss and can be used to sum any arithmetic progression as well.<|endoftext|>
TITLE: Critical point of Mabuchi energy functional has zero scalar curvature
QUESTION [5 upvotes]: I am currently reading the proof of convergence of Kähler-Ricci flow in the case $c_1(M)=0$ from Song, Weinkove. On page 45 he defines the term Mabuchi's $K$-energy functional:
$$ \frac{d}{dt}\mathrm{Mab}_{\omega_0}(\phi_t)=-\int_M\dot{\phi}_tR_{\phi_t}\omega_{\phi_t}^n$$
on the space:
$$\mathrm{PSH}(M, \omega_0)=\left\{\phi\in C^\infty(M)\mid \omega_0+\frac{\sqrt{-1}}{2\pi}\partial\bar{\partial}\phi>0\right\}$$
and where $\omega_{\phi_t}=\omega_0+\frac{\sqrt{-1}}{2\pi}\partial\bar{\partial}\phi_t$ and $R_{\phi_t}$ is the scalar curvature of $\omega_{\phi_t}$.
Then he directly claims that if $\phi_\infty$ is a critical point of $\mathrm{Mab}_{\omega_0}$ then $\omega_\infty$ has zero scalar curvature. I do not see why that is so. I am not an expert in this area, so I might be missing some well-known facts or obvious results.
Any comment is welcome!
REPLY [2 votes]: $\newcommand{\dd}{\partial}$It's a fundamental idiom of the calculus of variations that if $\int fg = 0$ for all $f$, then $g \equiv 0$.
The first displayed equation in your question isn't the Mabuchi energy itself, but the variation of the Mabuchi energy along an arbitrary path of smooth Kähler metrics. At a critical point, the variation vanishes (by definition) for all $\dot{\phi}_{t}$. Consequently,
$$
-\int_{M} \dot{\phi}_{\infty} R_{\infty}\, \omega_{\infty}^{n} = 0
\quad\text{for all $\dot{\phi}_{\infty}$,}
$$
namely, for every path through $\omega_{\infty}$ with $\dot{\omega}_{\infty} = \frac{i}{2\pi}\dd\bar{\dd} \dot{\phi}_{\infty}$. By the fundamental idiom, $R_{\infty} \equiv 0$, i.e., $\omega_{\infty}$ has vanishing scalar curvature.<|endoftext|>
TITLE: Brumer quintic polynomials - is there a general formula for the roots?
QUESTION [5 upvotes]: There exist a family of quintic polynomials, called Brumer's polynomials (or Kondo-Brumer), which have the form:
$$x^5+(a-3)x^4+(-a+b+3)x^3+(a^2-a-1-2b)x^2+bx+a,~~~a,b \in \mathbb{Q}$$
According to Wikipedia these polynomials are solvable in radicals.
Is there a general formula for roots of these polynomials? Or at least the closed form for some special cases?
I searched the web, but only found papers discussing the group properties (for example here) or other properties of Brumer's polynomials. Nothing about the roots.
Edit
I'm starting a bounty, and I would like either of these things:
General solution (at least one root), depending on $a,b$ - only if such a solution exists, and is short enough to write here in closed form.
Some methods for obtaining this solution - again, if it will lead to a form of the solution more compact and simpler than the general way to solve an arbitrary solvable quintic.
Solutions to some special cases (for some values of $a,b$ with $b \neq 0$)
A proof that no such simple solution is possible for this family of quintics.
REPLY [2 votes]: (Addendum to my answer.) Since the primary objective of the OP is to find solvable quintics with a "simple" solution, then we can add the "depressed" multi-parameter family,
$y^5+10cy^3+10dy^2+5ey+f = 0\tag{1}$
where the coefficients obey the quadratic in $f$,
$(c^3 + d^2 - c e) \big((5 c^2 - e)^2 + 16 c d^2\big) = (c^2 d + d e - c f)^2
\tag{2}$
Solve for $f$. Define this quintic's Lagrange resolvent as,
$$(z^2+u_1z-c^{5})(z^2+u_2z-c^{5}) = 0$$
where the $u_i$ are the two roots of the quadratic,
$$u^2-fu+(4c^5-5c^3e-4d^2e+ce^2+2cdf) = 0$$
then the solution to $(1)$ is,
$y = z_1^{1/5}+z_2^{1/5}+z_3^{1/5}+z_4^{1/5}\tag{3}$
Note that appropriate choices of the $3$ free parameters $c,d,e$ can yield rational $f$.
Example: A particular case is the Lehmer quintic,
$$x^5 + n^2x^4 - (2n^3 + 6n^2 + 10n + 10)x^3 + (n^4 + 5n^3 + 11n^2 + 15n + 5)x^2 + (n^3 + 4n^2 + 10n + 10)x + 1=0$$
Let $x = (y-n^2)/5$ to transform it to depressed form $(1)$. Its transformed coefficients then obey $(2)$.<|endoftext|>
TITLE: Evaluating $\int_0^1 \frac{\arctan x \log x}{1+x}dx$
QUESTION [16 upvotes]: In order to compute, in an elementary way,
$\displaystyle \int_0^1 \frac{x \arctan x \log \left( 1-x^2\right)}{1+x^2}dx$
(see Evaluating $\int_0^1 \frac{x \arctan x \log \left( 1-x^2\right)}{1+x^2}dx$)
i need to show, in a simple way, that:
$\displaystyle \int_0^1 \dfrac{\arctan x \log x}{1+x}dx=\dfrac{G\ln 2}{2}-\dfrac{\pi^3}{64}$
$G$ is Catalan's constant.
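(A quick numerical check of the claimed closed form; a sketch using mpmath:)

```python
from mpmath import mp, quad, atan, log, catalan, pi

mp.dps = 30
I = quad(lambda x: atan(x) * log(x) / (1 + x), [0, 1])
print(I)                                   # ~ -0.1670
print(catalan * log(2) / 2 - pi**3 / 64)   # the same value
```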
REPLY [2 votes]: Different approach:
start with applying integration by parts
$$I=\int_0^1\frac{\tan^{-1}(x)\ln(x)}{1+x}dx\\=\left|(\operatorname{Li}_2(-x)+\ln(x)\ln(1+x))\tan^{-1}(x)\right|_0^1-\int_0^1\frac{\operatorname{Li}_2(-x)+\ln(x)\ln(1+x)}{1+x^2}dx$$
$$=-\frac{\pi^3}{48}-\int_0^1\frac{\operatorname{Li}_2(-x)}{1+x^2}dx-\color{blue}{\int_0^1\frac{\ln(x)\ln(1+x)}{1+x^2}dx}\tag1$$
From $$\operatorname{Li}_2(x)=-\int_0^1\frac{x\ln(y)}{1-xy}dy$$
it follows that
$$\int_0^1\frac{\operatorname{Li}_2(-x)}{1+x^2}dx=\int_0^1\frac1{1+x^2}\left(\int_0^1\frac{x\ln(y)}{1+xy}dy\right)dx$$
$$=\int_0^1\ln(y)\left(\int_0^1\frac{x}{(1+x^2)(1+yx)}dx\right)dy$$
$$=\int_0^1\ln(y)\left(\frac{\pi}{4}\frac{y}{1+y^2}-\frac{\ln(1+y)}{1+y^2}+\frac{\ln(2)}{2(1+y^2)}\right)dy$$
$$=-\frac{\pi^3}{192}-\color{blue}{\int_0^1\frac{\ln(y)\ln(1+y)}{1+y^2}dy}-\frac12\ln(2)\ G\tag2$$
By plugging $(2)$ in $(1)$, the blue integral magically cancels out and we get $I=\frac12G\ln2-\frac{\pi^3}{64}$.<|endoftext|>
TITLE: Prove the uniform convergence of $f_{1}(x)= \sqrt x , f_{n+1}(x)=\sqrt{x+f_n(x)}$ in $[0,\infty]$
QUESTION [5 upvotes]: As far as I understand most of these questions use the M-test, but I can't find a series that suffices.
REPLY [3 votes]: Wait. $\lim\limits_{x\to0}f_n(x)=0$, but $\lim\limits_{x\to0}f_\infty(x)=1$. There is no uniform convergence!
Apparently, $f_\infty(x)=\sqrt{x+{1\over4}}+{1\over2}$
You might want to show the uniform convergence on $(1,\infty)$ or $(\varepsilon,\infty)$; that's another story, and a simple one at that.<|endoftext|>
TITLE: Finding $f \in L^p(\mathbb{R}^n)$ such that $\hat{f} \notin L^p(\mathbb{R}^n)$
QUESTION [5 upvotes]: I heard there were functions $f \in L^p(\mathbb{R}^n)$ such that $\hat{f} \notin L^p(\mathbb{R}^n)$. Is there a concrete example of such functions ?
Thanks in advance !
REPLY [4 votes]: Let $r > 0$. Then
$$
\int_{0}^{\infty}e^{-rx}e^{-isx}dx=\frac{1}{r+is}.
$$
The function $f(x)=e^{-rx}\chi_{[0,\infty)}(x)$ is in $L^1$, but $\hat{f}(s)=\frac{1}{\sqrt{2\pi}(r+is)}$ is not in $L^1$.<|endoftext|>
TITLE: Find all odd positive integers $n$ for which there exists odd positive integers $x_1,x_2,\ldots,x_n$ such that $x_1^2+x_2^2+\cdots+x_n^2=n^4$.
QUESTION [5 upvotes]: Find all odd positive integers $n$ for which there exists odd positive integers $x_1,x_2,..,x_n$ such that
$$x_1^2+x_2^2+\cdots+x_n^2=n^4\,.$$
My work so far:
For $n=3$, the equation $$x_1^2+x_2^2+x_3^2=81$$
has no solutions because if a solution exists the, in modulo $4$, we have
$$3\equiv 1\pmod{4}$$
which is a contradiction.
$n\ge 5$?
I need help here.
REPLY [7 votes]: all odd squares are $1 \pmod 8.$ This includes $n^4.$ Meanwhile,
$$ x_1^2 +x_2^2+ \cdots+ x_n^2 \equiv n \pmod 8. $$ So it is necessary to have $n \equiv 1 \pmod 8.$
In the other direction, all numbers $k \equiv 3 \pmod 8$ are the sum of three odd squares. This is a result of Gauss, and equivalent to the fact that all positive integers ae the sum of three triangular numbers, including $0$ if needed.
As a result, take any $n \equiv 1 \pmod 8.$ Take $x_4, x_5, \ldots, x_n$ to be anything (odd) you like, as long as the sum of squares is below $n^4.$ The leftover requires
$$ x_1^2 + x_2^2 + x_3^2 = n^4 - \left( x_4^2 + \cdots+ x_n^2 \right) $$
which can always be solved in (odd) integers.
After Greg's comment, found a fairly greedy solution that does not require quoting Gauss. We have $n \equiv 1 \pmod 8.$ Note $37 \equiv 5 \pmod 8.$
Let
$$ K = \frac{3n - 3}{8}, $$
$$ W = \frac{5n - 37}{8}, $$
so that
$$ K + W = n-5, $$
$$ 9K + W = 4n-8. $$
The solution will have $K$ of the $x_j$ equal to $3,$ so those squares are $9,$ and their sum is $9K.$ We will also have $W$ of the $x_j$ set to $1,$ so their sum is just $W.$ Then, with a total of $n$ (odd) squares,
$$ \color{red}{ (n^2 - 2)^2 + n^2 + n^2 + n^2 + (n-2)^2 + 9K + W = n^4} $$
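(A quick check of this identity, and of the term count, for the first several admissible $n$; a sketch:)

```python
for n in range(9, 200, 8):                    # n = 9, 17, 25, ..., all 1 mod 8
    K = (3 * n - 3) // 8
    W = (5 * n - 37) // 8
    total = (n**2 - 2)**2 + 3 * n**2 + (n - 2)**2 + 9 * K + W
    assert total == n**4 and 5 + K + W == n   # n odd squares summing to n^4
print("identity verified for n = 9, 17, ..., 193")
```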
With $n=9$ we get $K=3, W=1, $ $ \; \; 9^4 = 6561,$ then
$$ 79^2 + 9^2 + 9^2 + 9^2 + 7^2 + 9 \cdot 3 + 1 = 6241 + 81 + 81 + 81 + 49 + 27 + 1 = 6561. $$
With $n=17$ we get $K=6, W=6, $ $ \; \; 17^4 = 83521,$ then
$$ 287^2 + 17^2 + 17^2 + 17^2 + 15^2 + 9 \cdot 6 + 6 = 82369 + 289 + 289 + 289 + 225 + 54 + 6 = 83521. $$
With $n=25$ we get $K=9, W=11, $ $ \; \; 25^4 = 390625,$ then
$$ 623^2 + 25^2 + 25^2 + 25^2 + 23^2 + 9 \cdot 9 + 11 = 388129 + 625 + 625 + 625 + 529 + 81 + 11 = 390625. $$
With $n=33$ we get $K=12, W=16, $ $ \; \; 33^4 = 1185921,$ then
$$ 1087^2 + 33^2 + 33^2 + 33^2 + 31^2 + 9 \cdot 12 + 16 = 1181569 + 1089 + 1089 + 1089 + 961 + 108 + 16 = 1185921. $$<|endoftext|>
TITLE: Milnor's definition of smooth manifold
QUESTION [8 upvotes]: In Milnor's book "Topology from the Differentiable Viewpoint", on page one he defines a smooth manifold to be a subset $M \subset \mathbb R^n$ which is locally diffeomorphic to some open subset of $\mathbb R^k$, i.e. every point $x \in M$ has a neighborhood $U \subset \mathbb R^n$ such that $U \cap M$ is diffeomorphic to some open $V \subset \mathbb R^k$. The usual definition I know is that a smooth manifold is a (Hausdorff and second countable) topological space $M$ together with an open cover $\{U_{\alpha}\}$ and homeomorphisms $f_{\alpha} : U_{\alpha} \rightarrow V_{\alpha}$ such that $V_{\alpha} \subset \mathbb R^k$ are open and the transition functions $f_{\beta}f_{\alpha}^{-1}$ are smooth (where defined).
Question: How does the usual definition of smooth manifold imply
Milnor's definition of smooth manifold?
REPLY [6 votes]: You seem to be bothered by the fact that in Milnor's definition one talks of smooth maps, whereas in the chart definition, the $f_\alpha$ are only homeomorphisms. However, once we require the transition functions to be smooth, we can actually view the $f_\alpha$ as being smooth as well.
By the implicit function theorem, a $k$-manifold smoothly embedded in Euclidean $n$-space will locally be the graph of a smooth vector-valued function over a suitable coordinate $k$-plane in $\mathbb{R}^n$. This implies the smoothness of the transition functions. Conversely, if you have a smooth manifold (in the sense of smooth transition functions, etc.) then Whitney gives you a smooth embedding in Euclidean space.<|endoftext|>
TITLE: Finite commutative ring with unity and without nilpotent elements
QUESTION [5 upvotes]: Let $R$ be a commutative ring with unity such that for each $x \in R$
there exists a $n \in \mathbb{N}$, $n>1$, such that $x^n = x$. Then show that
$$
R\simeq F_{1}\times F_{2}\times \cdots\times F_{n}
$$
where $F_k$ are fields.
I am facing difficulty in proving the above without using the Artin–Wedderburn theorem (or its proof). Does there exist an elementary proof?
REPLY [2 votes]: The same logic from this solution to a special case applies here, except that you disregard the comments about $F_2$ and $F_3$ and just settle for all the quotients by prime ideals being fields.
The battle plan is, briefly:
The intersection of all prime ideals is the zero ideal
The quotient by any prime ideal is in fact a field.
The ring embeds into a product of quotient rings of the form $R/P$ where $P$ is a prime ideal. (And of course, that is a product of fields.)
So such a ring is a subring of a product of fields, and is not necessarily the whole product, or a finite product.
If the ring is finite, or better yet, only has finitely many maximal ideals, then the injection above is into a product of only finitely many fields. A collection of maximal ideals is always comaximal, so the Chinese Remainder Theorem says the map is surjective, and so it is actually an isomorphism.<|endoftext|>
TITLE: When do $n+2$ points in $\mathbb{R}^n$ lie on a same $(n-1)$-sphere?
QUESTION [7 upvotes]: When $n=2$, the following results are well-known:
Proposition 1. Let $A,B,C,D$ be $4$ distinct points in $\mathbb{R}^2$. They are collinear or cocyclic if and only if: $$\left(\overrightarrow{CA},\overrightarrow{CB}\right)\equiv\left(\overrightarrow{DA},\overrightarrow{DB}\right)\mod \pi.$$
Proposition 2. (Ptolemy's theorem) Let $A,B,C,D$ be $4$ distinct points in $\mathbb{R}^2$. They are cocyclic if and only if one of the following equalities holds true: $$AB.CD\pm AC.DB\pm AD.BC=0.$$
In this recent question it is proven that whenever $n+1$ points in $\mathbb{R}^n$ do not lie in any affine hyperplane, they are on a unique $(n-1)$-sphere, which leads me to ask the following:
Question. Is there a necessary and sufficient condition to determine when $n+2$ points in $\mathbb{R}^n$ are on a same affine hyperplane or on a same hypersphere?
I easily derived from the equations of an affine hyperplane and of an hypersphere that if $x_i:=(x_{i,j})$ are $n+2$ points of $\mathbb{R}^n$, the $x_i$s are on a same hyperplane or lie on a same hypersphere if and only if:
$$\left|\begin{matrix}{x_{1,1}}^2+\cdots+{x_{1,n}}^2&x_{1,1}&\cdots&x_{1,n}&1\\{x_{2,1}}^2+\cdots+{x_{2,n}}^2&x_{2,1}&\cdots&x_{2,n}&1\\\vdots&\vdots&\ddots&\vdots&\vdots\\{x_{n+1,1}}^2+\cdots+{x_{n+1,n}}^2&x_{n+1,1}&\cdots&x_{n+1,n}&1\\{x_{n+2,1}}^2+\cdots+{x_{n+2,n}}^2&x_{n+2,1}&\cdots&x_{n+2,n}&1\end{matrix}\right|=0.$$
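(A quick numerical check of this determinant condition in the plane, as a sketch:)

```python
import numpy as np

def cospherical_det(points):
    # rows: |x|^2, the coordinates, 1; zero determinant <=> the points lie on
    # a common hyperplane or hypersphere
    pts = np.asarray(points, dtype=float)
    M = np.column_stack([(pts**2).sum(axis=1), pts, np.ones(len(pts))])
    return np.linalg.det(M)

theta = [0.3, 1.1, 2.0, 4.5]
on_circle = [(np.cos(t), np.sin(t)) for t in theta]     # 4 points on the unit circle
off_circle = on_circle[:3] + [(1.5, 0.2)]               # move the last point off it
print(cospherical_det(on_circle))    # ~ 0 (up to rounding)
print(cospherical_det(off_circle))   # clearly nonzero
```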
However, I am more interested in a characterization involving angles in the same way as in proposition 1. or distances like in proposition 2. In particular, in the case $n=3$ is there a necessary and sufficient condition expressing a relation between solid angles?
Regarding the case $n=3$, my guess would be to determine the set of points from where one can observe a given circle with a constant solid angle.
REPLY [4 votes]: Assume $x_0 = 0$ for simplicity and let $x_i' = \frac{x_i}{|x_i|^2}$ be the images of $x_i$'s under an inversion centered at $x_0$. By a well-known property of inversions, $x_0,\ldots,x_{n+1}$ lie on an affine $n-1$-plane or an $n-1$-sphere if and only if $x_1',\ldots,x_{n+1}'$ lie on an affine $n-1$-plane.
When the latter is expressed using the determinant, this probably yields a condition analogous to the one you stated. However, I feel that this point of view is more geometric in nature.<|endoftext|>
TITLE: Existence of a universal cover of a manifold.
QUESTION [9 upvotes]: Suppose $M$ is a manifold, topological or smooth, etc. As a topological space, $M$ is primarily required to be locally homeomorphic to $\Bbb R^n$, with some added conditions that don't come along with this, like a global Hausdorff condition mentioned here, second countability or paracompactness etc.
Mainly these would seem to rule out certain pathological examples, or to simplify proofs, rather than having to say 'Let $M$ be a Hausdorff, second countable manifold$\ldots$' every time.
From algebraic topology the existence of a universal covering space of a topological space $X$, required $X$ to be connected, locally path connected, and semi locally simply connected. In the course I did we said path connected, locally path connected and semi locally simply connected. I believe these are equivalent.
My question is: For a $M$ a manifold, does $M$ satisfy the existence criterion?
Or should I specifically require that $M$ is a connected manifold, and it would seem that locally path connected and semi locally simply connected come from the charts, or local homeomorphisms to $\Bbb R^n$.
REPLY [11 votes]: You are correct. Because each point in a manifold has a neighborhood homeomorphic to some Euclidean space, any manifold is locally contractible, which implies that is it both locally path connected and locally simply connected. Therefore if we restrict our attention to connected manifolds (which we usually do), we see that all manifolds admit universal covers (and these are also manifolds).<|endoftext|>
TITLE: Preserving equality between different mathematical objects
QUESTION [17 upvotes]: I'm taking an 'Intro to Higher Mathematics'-type course right now, were we learn about basic set theory, number theory, algebra, etc. and I had the following thought:
Say you're trying to solve a problem in some mathematical field $A$,
you have objects $a$ and $b$ within theory $A$. You realize that
objects $c$ and $d$ in the field $B$ are similar to objects $a$ and
$b$, and that your problem is easier to solve in field $B$. Is there a
way to make $a \equiv c$ and $b \equiv d$ so that you can go back and
forth between $A$ and $B$, being sure that whatever it is that you did
in $B$ is also valid in $A$, and viceversa?
Maybe an example would make what I'm trying to say clearer. (I'm still learning the basics, I'm aware that my knowledge is very limited, so excuse me if my choice of examples is not very good.)
Let $A = \{a,b,c,d\}$. Let $R$ be an equivalence relation on $A$, such that $$R=\{(a,a),(b,b),(c,c),(d,d),(a,b),(b,a),(a,c),(c,a),(b,c),(c,b)\}.$$
Now, let's say I want to make this into a directed graph, so let the set of vertices be all the elements of $A$, so that $V=A$, and let the set of arcs be the equivalence relation on $A$, so that $E=R$; this gives a directed graph on the four vertices $a,b,c,d$.
Now, let its adjacency matrix be
$$ M_{ij}= \left( \begin{array}{cccc}
1&1&1&0\\
1&1&1&0\\
1&1&1&0\\
0&0&0&1\\
\end{array} \right), $$ (where $a=1,b=2,c=3,d=4$ for $i,j$).
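For instance, one can pass mechanically from the relation to the adjacency matrix and then use a matrix product to test a relational property such as transitivity (a sketch):

```python
import numpy as np

A = ["a", "b", "c", "d"]
R = {("a","a"),("b","b"),("c","c"),("d","d"),("a","b"),("b","a"),
     ("a","c"),("c","a"),("b","c"),("c","b")}

idx = {v: i for i, v in enumerate(A)}
M = np.zeros((4, 4), dtype=int)
for (u, v) in R:
    M[idx[u], idx[v]] = 1
print(M)                              # the adjacency matrix above

# transitivity of R: wherever a length-2 path exists (M @ M > 0),
# a direct arc must already be present (M == 1)
print(np.all(M[(M @ M) > 0] == 1))    # True
```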
As we can see, we now have three different ways to approach the same object; now, suppose I want to do something with the initial relation, maybe solve some problem or exercise, my question is: up to what point can I use theorems or results in Graph Theory or Linear Algebra/Matrix Theory to help me solve the problem?
I know that perhaps my example is very shoddy, but my question is more in general: given some equivalence between some mathematical objects in different theories, is there a way to systematize all the things that I can and cannot do in one theory that preserve the equivalence between the objects I'm using?
E.g, in my previous example, say I operate exclusively on the adjacency matrix of the graph to try to solve my problem; maybe there's some theorem or proposition that makes my problem easier in Linear Algebra than in Set Theory, how can I prove that whatever result I get on the matrix is equivalent to the result I would get working on the sets alone?
To try to summarize, what I'm asking is:
How can I work on some problem from one branch of mathematics to another without losing the properties of the objects I'm working with, so that my results are valid.
Is there some way to generalize the idea of equivalence between theories.
REPLY [5 votes]: I would definitely say that Category Theory was invented to explore exactly the sort of thing you're asking about. The tricky part is that if the objects in question are similar but not identical (unlike in your example, where you have literally given different representations of the exact same object), then the analog is not perfect, and some things you know about objects $a$ and $b$ may not have analogs or even be meaningful for $c$ and $d$. (In category theory, one deals with forgetful functors between categories, which capture this concept in a general way).<|endoftext|>
TITLE: What does it mean when dx is put on the start in an integral?
QUESTION [22 upvotes]: I have seen something like this before: $\int \frac{dx}{(e+1)^2}$. This is apparently another way to write $\int \frac{1}{(e+1)^2}dx$.
However, considering this statement: $\int\frac{du}{(u-1)u^2} = \int du(\frac{1}{u-1}-\frac{1}{u}-\frac{1}{u^2})$. On the left side, $du$ is moved, If I had to evaluate an integral that is written in this way, how would I expand it into the usual $\int f(x)dx$ form?
(From the comments) Is this truly a product and if not why is it commutative?
REPLY [3 votes]: I think as others are getting at, the confusion probably arises from the (great) question: what is $dx$ anyway? I will try and give an intuitive answer that should help a little with the confusion.
The rigorous answer is that $dx$ is a differential 1-form, which you will learn about if you do real analysis, but I think a perfectly good intuitive response is it is just a "tiny bit" of $x$. You can see this by noting that the integral is roughly defined as
$$
\lim_{\Delta x\rightarrow 0}\sum f(x*)\Delta x
$$
For some $x*$ inside each interval of shrinking size. $dx$ is then exactly this arbitrarily small $\Delta x$. As such, it is just an object that commutes with multiplication like $\Delta x$, and can be moved around as you wish.
Thinking in this way, your question is the same as
$$
\sum f(x*)\Delta x=\sum \Delta xf(x*)
$$
where I hope you will believe me it is true.
Additionally, I think this intuition for differentials will be helpful when you get to multi-variable calculus and define the volume element
$$
\mathrm dv=\mathrm dx \mathrm dy \mathrm dz
$$
which can be thought of as a tiny cube.<|endoftext|>
TITLE: Why is the complex plane shaped like it is?
QUESTION [71 upvotes]: It's always taken for granted that the real number line is perpendicular to multiples of $i$, but why is that? Why isn't $i$ just at some non-90 degree angle to the real number line? Could someone please explain the logic or rationale behind this? It seems self-apparent to me, but I cannot actually see why it is.
Furthermore, why is the real number line even straight? Why does it not bend or curve? I suppose arbitrarily it might be strange to bend it, but why couldn't it bend at 0? Is there a proof showing why?
Of course, these things seem natural to me and make sense, but why does the complex plane have its shape? Is there a detailed proof showing precisely why, or is it just an arbitrary choice some person made many years ago that we choose to accept because it makes sense to us?
REPLY [15 votes]: Riemann made it into a sphere: the Riemann sphere, obtained by adding a single point at infinity to the complex plane.<|endoftext|>
TITLE: Proving $\sum_{k=1}^{n}{(-1)^{k+1} {{n}\choose{k}}\frac{1}{k}=H_n}$
QUESTION [14 upvotes]: I've been trying to prove
$$\sum_{k=1}^{n}{(-1)^{k+1} {{n}\choose{k}}\frac{1}{k}=H_n}$$
I've tried perturbation and inversion but still nothing. I've even tried expanding the sum to try and find some pattern that could relate this to the harmonic series but this just seems surreal. I can't find any logical reasoning behind this identity. I don't understand Wikipedia's proof. Is there any proof that doesn't require the integral transform?
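(A quick exact check of the identity for small $n$, as a sanity check:)

```python
from fractions import Fraction
from math import comb

def lhs(n):
    return sum(Fraction((-1)**(k + 1) * comb(n, k), k) for k in range(1, n + 1))

def harmonic(n):
    return sum(Fraction(1, k) for k in range(1, n + 1))

print(all(lhs(n) == harmonic(n) for n in range(1, 25)))   # True
```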
REPLY [2 votes]: The following variation is Example 3 in section 1.2 of John Riordan's classic Combinatorial Identities.
Consider for $n=1,2,\ldots$
\begin{align*}
f_n&=\sum_{k=1}^n(-1)^{k+1}\binom{n}{k}\frac{1}{k}\\
&=\sum_{k=1}^n(-1)^{k+1}\left[\binom{n-1}{k}+\binom{n-1}{k-1}\right]\frac{1}{k}\\
&=f_{n-1}-\frac{1}{n}\sum_{k=1}^n(-1)^k\binom{n}{k}\\
&=f_{n-1}-\frac{1}{n}\left[(1-1)^n-1\right]\\
&=f_{n-1}+\frac{1}{n}\\
&=H_n
\end{align*}
since $f_1=1$.<|endoftext|>
TITLE: KL Divergence between the sums of random variables.
QUESTION [10 upvotes]: The relative entropy or Kullback–Leibler distance between
two probability density functions $g(x)$ and $f(x)$ is defined as
$$D(g\|f) = \int_{x} g(x)\log\frac{g(x)}{f(x)} dx .$$
We have two random variables $V$ and $W$,
\begin{equation*}
\begin{split}
&V=X_1+X_2, \text{where}\ X_1\sim g(x), X_2\sim f(x)\ \text{are independent},\\
&W=X_3+X_4, \text{where}\ X_3, X_4\sim f(x)\ \text{are independent}.
\end{split}
\end{equation*}
It is easy to show that
\begin{equation*}
\begin{split}
&V\sim G(x)=(g\ast f)(x),\\
&W\sim F(x)=(f\ast f)(x),
\end{split}
\end{equation*}
where $(g\ast f)(x) = \int g(\tau)f(x-\tau)d\tau$ is the convolution of $g$ and $f$.
The questions are:
Is it true that $D(g\|f)> D(G\|F)?$
Is it true that $\frac{1}{2}D(g\|f)> D(G\|F)?$
If we can prove 2, 1 is obviously true.
They are true for Poisson and Gaussian distributions, however, I can't prove for the general cases.
REPLY [2 votes]: I will reason in the domain of discrete probabilities. As discrete distributions are limits of continuous ones, I think we can prove that the results are valid in the domain of continuous densities.
Proof of 1
N.B. Strict inequality is false: equality evidently holds when $f=g$.
We denote by $g_i$ and $f_i$ the probabilities of $g$ and $f$, and by $h_{ij}$ the conditional probabilities $P(V=i \mid X_1=j)$.
We just need the fact that $V$ and $W$ have the same conditional probabilities:
$P(V \mid X_1)=P(W \mid X_3)$
$$D(G \parallel F)=\sum_i{(\sum_j{g_jh_{ij}})}\log(\frac{\sum_k{}g_kh_{ik}}{\sum_k{}f_kh_{ik}})$$
We then use the Log-sum Inequality (cf https://en.wikipedia.org/wiki/Log_sum_inequality)
$$\log(\frac{\sum_k{}g_kh_{ik}}{\sum_k{}f_kh_{ik}}) \le \frac{\sum_k{g_kh_{ik}\log(\frac{g_k}{f_k})}}{\sum_k{g_kh_{ik}}}$$
So
$$D(G \parallel F)\le \sum_{i,k}g_kh_{ik}\log(\frac{g_k}{f_k})=\sum_k[g_k\log(\frac{g_k}{f_k})(\sum_ih_{ik})]=\sum_kg_k\log(\frac{g_k}{f_k})=D(g\parallel f)$$
Counter exemple showing that 2 is not true
Suppose Bernoulli distributions for each $X_i$:
$P_f(X=0)=1-\epsilon$
$P_f(X=1)=\epsilon$
and the opposite for g :
$P_g(X=0)=\epsilon$
$P_g(X=1)=1-\epsilon$
We have :
$P_G(V=0)=P_G(V=2)=\epsilon-\epsilon^2$
$P_G(V=1)=1-2\epsilon+2\epsilon^2$
$P_F(W=0)=1-2\epsilon+\epsilon^2$
$P_F(W=1)=2\epsilon-2\epsilon^2$
$P_F(W=2)=\epsilon^2$
It is easy to show that for small $\epsilon$ :
$D(g \parallel f) \sim -\log(\epsilon)$ and $D(G \parallel F) \sim -\log(\epsilon)$
So $D(g \parallel f) \sim D(G \parallel F)$
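(A numerical check with, e.g., $\epsilon=0.01$; a sketch:)

```python
import numpy as np

def kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

eps = 0.01
f = np.array([1 - eps, eps])     # P_f(X=0), P_f(X=1)
g = np.array([eps, 1 - eps])     # P_g(X=0), P_g(X=1)
G = np.convolve(g, f)            # law of V = X_1 + X_2 with X_1 ~ g, X_2 ~ f
F = np.convolve(f, f)            # law of W = X_3 + X_4 with X_3, X_4 ~ f

print(kl(g, f), kl(G, F))            # both of order -log(eps)
print(0.5 * kl(g, f) < kl(G, F))     # True for this eps
```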
Therefore, one can find an epsilon so that $\frac 1 2 D(g \parallel f) < D(G \parallel F)$<|endoftext|>
TITLE: Solve $4 \times2^x+3^x=5^x$ without any sort of calculator
QUESTION [6 upvotes]: Is there a way I can solve the following equation only by using high school mathematics?
$$4 \times2^x+3^x=5^x$$
I tried writing $5$ as $2+3$ but didn't get any result.
After that I tried dividing by $5^x$ to see how the function behaves, but unfortunately that didn't get me anywhere either.
REPLY [2 votes]: You tried to use $2+3=5$ to no avail. A slightly different "flash of insight" approach would be to invoke Pythagoras: $3^2+4^2=5^2$. Indeed, $x=2$ works, since $4\cdot 2^2+3^2=16+9=25=5^2$; dividing the equation by $5^x$ gives $4(2/5)^x+(3/5)^x=1$, whose left-hand side is strictly decreasing, so $x=2$ is the only solution.<|endoftext|>
TITLE: Points at infinity of a conic section and its eccentricity, foci, and directrix?
QUESTION [5 upvotes]: Background on projective geometry and conic sections; you might want to skip to the actual question
A conic section is analytically described as the zero-locus of points $(x,y)$ in the affine plane of a quadratic polynomial in two variables $x$ and $y$, i.e. the solutions to the equation
$$Ax^2+Bxy+Cy^2+Dx+Ey+F=0$$
Geometrically, these are the sections obtained by intersecting, in $3$-dimensional affine space, the plane $z=1$ with the cone consisting of the solutions $(x,y,z)$ to the homogeneous quadratic equation
$$Ax^2+Bxy+Cy^2+Dxz+Eyz+Fz^2=0$$
From a different point of view, the cone is composed not of points $(x,y,z)$ in $3$-dimensional space, but of lines through the origin $(0,0,0)$ in $3$-dimensional space. This is reflected by the homogeneity of the equation: if $(x,y,z)$ is a solution, then so is $(\lambda x,\lambda y,\lambda z)$ for any scalar $\lambda$.
Since any line is determined by two distinct points it passes through, lines through the origin $(0,0,0)$ can be represented by homogeneous coordinates $[x,y,z]$ where not all of $x,y,z$ are zero. Each triple $[x,y,z]$ represents the line going through $(0,0,0)$, but two triples represent the same line if they differ by a scalar: i.e. $[x,y,z]$ and $[u,v,w]$ represent the same line if there exists a (non-zero) $\lambda$ such that $x=\lambda u$, $y=\lambda v$, and $z=\lambda w$.
With homogeneous coordinates under our belt, we can define the projective plane to be the space of lines through the origin $(0,0,0)$ in $3$-dimensional affine space. Then we also have a cone in the projective plane given by the solutions $[x,y,z]$ to the homogeneous equation
$$Ax^2+Bxy+Cy^2+Dxz+Eyz+Fz^2=0$$
For the purposes of this question, each point in the projective plane, i.e. line through the origin $(0,0,0)$ is one of two types: the lines $[x,y,z]$ with $z\neq 0$ (which intersect the plane $z=1$ in $3$-dimensional space) and the lines $[x,y,0]$ (which are parallel to the plane $z=1$ in $3$-dimensional space). The former can be identified by homogeneity with the lines $[x,y,1]$, so that each line determines a point in the affine plane and conversely each point in the affine plane determines a line. Thus, the conic section we started with is the intersection of the cone in projective space with the affine plane $z\neq0$.
The latter points, of the form $[x,y,0]$ form the so-called line at infinity around our affine plane. The reason for this lies in how lines in the affine plane interact with points in projective space. A line in the affine plane consists of the solutions $(x,y)$ to an equation
$$ax+by=c$$
Homogenizing this equation gives a homogeneous equation
$$ax+by=cz$$
whose solutions $[x,y,z]$ determine a line in the projective plane. The line in the affine plane is the intersection of the line in the projective plane with the affine plane $z\neq0$, i.e., with the points of the projective plane with homogeneous coordinates $[x,y,1]$.
On the other hand, the intersection of the line in the projective plane with the line at infinity is the intersections with points of the projective plane with homogeneous coordinates $[x,y,0]$. There is only one such intersection point, and that is the point $[-b,a,0]$ on the line at infinity. Since two lines in the affine plane are parallel if and only if their ratios $a:b$ are the same, we see that all lines in the affine plane parallel to the line given by $ax+by=c$ intersect the line at infinity at $[-b,a,0]$. Hence, the points $[x,y,0]$ on the line at infinity are precisely the point at infinity in which parallel lines intersect!
Actual question
It is natural to ask where does the conic section
$$Ax^2+Bxy+Cy^2+Dx+Ey+F=0$$
intersect the line at infinity? Plugging in $z=0$ in the homogenized equation, we arrive at the solutions $[x,y,0]$ of the homogenized quadratic
$$Ax^2+Bxy+Cy^2=0$$
Presuming we are not working in characteristic $2$, we can compute the solutions explicitly. In the case of real coefficients, the discriminant $B^2-4AC$ being positive, $0$, or negative corresponds to having two intersection points at infinity (hyperbola), one intersection point at infinity (parabola), or no intersection points at infinity (ellipse). Furthermore (still in the case of real coefficients), if the intersection points are actually $[1,\pm i,0]$ then the conic has to be a circle (these are known as the circular points at infinity). In particular, this exhibits the fact that all conic sections are uniquely determined by $5$ points as a generalization of the fact that a circle is uniquely determined by $3$ points, since a circle is a conic section that has to go through the circular points at infinity.
My question is whether we can give a definition of focus, directrix, and eccentricity using only the algebro-geometric structure of the projective plane, and not the usual metric definition. In particular, since the eccentricity governs (for real coefficients) the type of conic, it might be computable from the points at infinity.
REPLY [3 votes]: In fact, a conic has 4 foci. We can see this if we look at a canonical ellipse, which is wide and short, and start shrinking it in the direction of the $x$-axis. The two foci get closer, until we reach a circle, where they collapse to one point. Then, if we continue, they start to have a different trajectory - up and down. This cannot happen continuously; what really happens is that two foci escape to the complex part of the plane, while the other two arrive from it.
A purely algebro-geometric definition of the foci (well, not exactly, because it depends on the projective coordinates you choose!) is the following:
Take the points $I=[1:i:0]$ and $J=[1:-i:0]$, and a conic $C$. By Bézout's theorem there are two lines through $I$ that are tangent to $C$, and likewise with $J$. The two pairs of lines intersect in $4$ points, which are precisely the foci.
Remark: The fact that there are two tangents from a point to $C$ does not follow immediately from Bézout's theorem, since "tangency" is not a linear condition. However, it does follow if we first use projective duality with respect to $C$.<|endoftext|>
TITLE: probability of sorted array with duplicate numbers
QUESTION [7 upvotes]: Suppose I have a sequence of n numbers
$\{a_1,a_2,a_3,\ldots,a_n\}$ where some of the numbers are repeated. What is the probability that the sequence is sorted?
REPLY [3 votes]: If the numbers were all distinct, the probability would be $1/n!$: of the $n!$ equally likely orderings, exactly one is the sorted one. When numbers are repeated, more than one ordering coincides with the sorted sequence. If the distinct values have multiplicities $m_1, m_2, \ldots, m_k$ (so $m_1+\cdots+m_k=n$), then the orderings that produce the sorted sequence are exactly those obtained by permuting equal values among themselves, and there are $m_1!\,m_2!\cdots m_k!$ of them. Hence the probability is $\dfrac{m_1!\,m_2!\cdots m_k!}{n!}$.
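A brute-force check of this formula on a small example (the sample sequence and the script are mine):

```python
from itertools import permutations
from math import factorial
from collections import Counter

seq = [2, 1, 3, 1, 2, 2]
n = len(seq)

# brute force: fraction of the n! position-permutations that come out sorted
sorted_count = sum(1 for p in permutations(seq) if list(p) == sorted(seq))
prob_bruteforce = sorted_count / factorial(n)

# formula: product of factorials of the multiplicities, divided by n!
prob_formula = 1.0
for m in Counter(seq).values():
    prob_formula *= factorial(m)
prob_formula /= factorial(n)

print(prob_bruteforce, prob_formula)   # both equal 12/720
```
<|endoftext|>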
TITLE: Probability of one Poisson variable being greater than another
QUESTION [5 upvotes]: Given two Poisson distributions with different λ values, if each were to produce a single random variable, is there closed-form expression for calculating the probability of one random variable being greater than the other?
REPLY [6 votes]: Take two Poisson random variables $A$ and $B$ with means $\lambda_A$ and $\lambda_B$ respectively. We see that
$$P(A > B) = \sum_{k = 0}^{\infty} P(A > B | B = k)P(B = k)$$
$$=\sum_{k=0}^{\infty} P(A \geq k + 1)P(B = k)
= \sum_{k= 0}^{\infty} \left(\sum_{l=k+1}^{\infty} \frac{\lambda_A^{l}
e^{-\lambda_A}}{l!} \right)\frac{\lambda_B^k e^{-\lambda_B}}{k!}.$$
In general this double sum does not simplify to an elementary closed form, but the key idea is to condition on the value of one of the variables.
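For moderate rates the truncated double sum is easy to evaluate numerically; the following sketch (the function name and truncation level are my own choices) builds the Poisson pmfs iteratively:

```python
import math

def prob_A_greater_B(lam_a, lam_b, kmax=100):
    """Truncated evaluation of P(A > B) = sum_k P(A >= k+1) P(B = k)."""
    pa = math.exp(-lam_a)   # P(A = 0)
    pb = math.exp(-lam_b)   # P(B = 0)
    cdf_a = pa              # P(A <= 0)
    total = 0.0
    for k in range(kmax):
        total += (1.0 - cdf_a) * pb   # P(A >= k+1) * P(B = k)
        pb *= lam_b / (k + 1)         # advance to P(B = k+1)
        pa *= lam_a / (k + 1)         # advance to P(A = k+1)
        cdf_a += pa                   # P(A <= k+1)
    return total

print(prob_A_greater_B(3.0, 2.0))
```
<|endoftext|>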
TITLE: Fermat's Little Theorem and Euler's Theorem
QUESTION [7 upvotes]: I'm having trouble understanding clever applications of Fermat's Little Theorem and its generalization, Euler's Theorem. I already understand the derivation of both, but I can't think of ways to use them in problems that I know I must use them (i.e. the question topic is set).
Here are two questions I had trouble with trying to use FLT and ET:
Find the 5th digit from the rightmost end of the number $N = 5^{\displaystyle 5^{\displaystyle 5^{\displaystyle 5^{\displaystyle 5}}}}$.
Define the sequence of positive integers $a_n$ recursively by $a_1=7$ and $a_n=7^{a_{n-1}}$ for all $n\geq 2$. Determine the last two digits of $a_{2007}$.
I managed to solve the second one with bashing and discovering a cycle in powers of 7 mod 1000, and that may be the easier path for this particular question rather than using ET. However, applying ET to stacked exponents I believe is nevertheless an essential concept for solving more complex questions like number 1 that I wish to learn. It would be helpful if I could get hints on using ET in those two problems and a general ET approach to stacked exponents and its motivation.
REPLY [4 votes]: Walking through the first problem, we effectively need to find $a \equiv 5^{5^{5^{5^{5}}}} \bmod 10^5$. This splits easily into finding $a_1 \equiv a \bmod 2^5$ and $a_2 \equiv a \bmod 5^5$ which can then be re-united with the Chinese remainder theorem.
The order of $5 \bmod 32$ divides $\phi(32)=16$ (and actually we could say it divides $\lambda(32) = 8$ due to the Carmichael function). Because $16$ is a power of two (so the order of every number will also be a power of two) we can quickly square repeatedly: $(5\to25\equiv-7\to49 \equiv17\to 289\equiv 1)$ to find that the order of $5$ is in fact $8$. So we need to find:
$$ b \equiv 5^{5^{5^{5}}} \bmod 8 \\
\text{which will give:}\qquad a_1\equiv 5^b \bmod 32$$
Next step; the order of $5 \bmod 8$ is easily seen to be $2$, and we can note that the remaining exponent is odd. Stepping back down the exponent ladder,
$$ b \equiv 5^\text{odd} \equiv 5 \bmod 8 \\
\text{and }\qquad a_1\equiv 5^5 \equiv 21 \bmod 32$$
Also we have $a_2\equiv a \equiv 0 \bmod 5^5$. As it happens $5^5=3125\equiv 21 \bmod 32$, so $a\equiv 3125 \bmod 10^5$ and the requested digit is $0$.
Note that this holds for any tower of $5$s of height $3$ or more; this is inevitable behaviour: values far up the exponent tower have no effect on the final modular result. In the second problem the intimidation factor of a tower of exponents $2007$ high is removed by this observation; only the bottom few levels make a difference.
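Here is a short Python sketch of the same computation (the brute-force CRT recombination and the names are mine):

```python
# a = 5^(5^(5^(5^5))) mod 10^5, split with the Chinese remainder theorem.

a2 = 0                 # mod 5^5 the tower is a multiple of 5^5, so it is 0
b = 5                  # the tower above the base is odd, and 5^odd = 5 (mod 8)
a1 = pow(5, b, 32)     # = 21, since the order of 5 mod 32 divides 8

# recombine: find 0 <= a < 10^5 with a = 0 (mod 3125) and a = 21 (mod 32)
a = next(x for x in range(0, 10**5, 3125) if x % 32 == a1)
print(a)               # 3125 -> the 5th digit from the right is 0
```
<|endoftext|>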
TITLE: Integral ${\large\int}_0^{\pi/2}\frac{x\,\log\tan x}{\sin x}\,dx$
QUESTION [9 upvotes]: Could you please help me to find closed form expressions for the following definite integrals:
$$I_1=\int_0^{\pi/2}\frac{x\,\log\tan x}{\sin x}\,dx\approx0.3606065973884796896...$$
$$I_2=\int_0^{\pi/3}\frac{x\,\log\tan x}{\sin x}\,dx\approx-0.845708026471324676...$$
REPLY [7 votes]: $\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\Li}[1]{\,\mathrm{Li}_{#1}}
\newcommand{\mc}[1]{\,\mathcal{#1}}
\newcommand{\mrm}[1]{\,\mathrm{#1}}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
Hereafter, the
$\ds{\ln}$ branch-cut 'sits' along the 'negative $\ds{x}$ axis': Namely,
$\ds{-\pi < \mrm{arg}\pars{z} < \pi}$.
\begin{align}
\color{#f00}{I_{1}} & \equiv
\color{#f00}{\int_0^{\pi/2}{x\ \ln\pars{\tan\pars{x}} \over \sin\pars{x}}
\,\dd x} =
\Re\int_{\verts{z}\ =\ 1 \atop
{\vphantom{\huge A}0\ <\ \mrm{arg}\pars{z}\ <\ \pi/2}}
{-\ic\ln\pars{z}\ln\pars{-\ic\bracks{z^{2} - 1}/\bracks{z^{2} + 1}} \over
\pars{z^{2} - 1}/\pars{2\ic z}}\,{\dd z \over \ic z}
\\[5mm] & =
-2\,\Im\int_{\verts{z}\ =\ 1 \atop
{\vphantom{\huge A}0\ <\ \mrm{arg}\pars{z}\ <\ \pi/2}}\quad\quad
\ln\pars{z}\ln\pars{{1 - z^{2} \over 1 + z^{2}}\,\ic}\,{\dd z \over 1 - z^{2}}
\\[5mm] & =
2\,\Im\int_{1}^{0}
\ln\pars{y\ic}\ln\pars{{1 + y^{2} \over 1 - y^{2}}\,\ic}
\,{\ic\,\dd y \over 1 + y^{2}} +
2\,\Im\int_{0}^{1}
\ln\pars{x}\ln\pars{{1 - x^{2} \over 1 + x^{2}}\,\ic}
\,{\dd x \over 1 - x^{2}}
\\[5mm] & =
-2\int_{0}^{1}\ln\pars{y}\ln\pars{1 + y^{2} \over 1 - y^{2}}
\,{\dd y \over 1 + y^{2}} -
2\int_{0}^{1}{-\pars{\pi/2}^{2} \over 1 + y^{2}}\,\dd y +
2\int_{0}^{1}{\ln\pars{x}\pars{\pi/2} \over 1 - x^{2}}\,\dd x
\tag{1}
\end{align}
Since
$\ds{\int_{0}^{1}{\dd y \over 1 + y^{2}} = {\pi \over 4}}$ and
$\ds{\int_{0}^{1}{\ln\pars{x} \over 1 - x^{2}}\,\dd x = -\,{\pi^{2} \over 8}}$, the last two terms don't yield any
contribution to the final result. The whole contribution is coming from the first integral in $\ds{\pars{1}}$. Namely,
\begin{align}
\color{#f00}{I_{1}} & \equiv
\color{#f00}{\int_0^{\pi/2}{x\ \ln\pars{\tan\pars{x}} \over \sin\pars{x}}
\,\dd x} =
-2\int_{0}^{1}\ln\pars{y}\ln\pars{1 + y^{2} \over 1 - y^{2}}
\,{\dd y \over 1 + y^{2}}
\\[5mm] & =
2\int_{0}^{1}{\ln\pars{y}\ln\pars{1 + y} \over 1 + y^{2}}\,\dd y +
2\int_{0}^{1}{\ln\pars{y}\ln\pars{1 - y} \over 1 + y^{2}}\,\dd y
-4\,\Re\int_{0}^{1}{\ln\pars{y}\ln\pars{1 + y\ic} \over 1 + y^{2}}\,\dd y
\\[5mm] & =
\mrm{f}\pars{1} + \mrm{f}\pars{-1} - 2\mrm{f}\pars{\ic}\tag{2}
\end{align}
where
\begin{equation}
\mrm{f}\pars{a} \equiv
2\,\Re\int_{0}^{1}{\ln\pars{y}\ln\pars{1 + ay} \over 1 + y^{2}}\,\dd y -
\int_{0}^{1}{\ln^{2}\pars{y} \over 1 + y^{2}}\,\dd y
\end{equation}
$\ds{\mrm{f}\pars{a}}$ can be rewritten in the following form:
\begin{align}
\mrm{f}\pars{a} & =
\Re\int_{0}^{1}{\ln^{2}\pars{1 + ay} \over 1 + y^{2}}\,\dd y -
\Re\int_{0}^{1}\ln^{2}\pars{y \over 1 + ay}\,{\dd y \over 1 + y^{2}}
\end{align}
In the first integral we make the substitution
$\ds{\ {1 \over 1 + ay}\ \mapsto\ y\ }$ while in the second
$\ds{\ {y \over 1 + ay}\ \mapsto\ y.\ }$ $\ds{\mrm{f}\pars{a}}$ is reduced to:
\begin{align}
\mrm{f}\pars{a} & =
-\Re\int_{1}^{1/\pars{1 + a}}
{a\ln^{2}\pars{y} \over \pars{1 + a^{2}}y^{2} - 2y + 1}\,\dd y -
\Re\int_{0}^{1/\pars{1 + a}}{\ln^{2}\pars{y} \over
\pars{1 + a^{2}}y^{2} - 2ay + 1 }\,\dd y
\end{align}
Those integral are easily reduced to the form
$\ds{\pars{~y_{0}\ \mbox{is a constant}~}}$
$\ds{\int{\ln^{2}\pars{y} \over y_{0} - y}\,\dd y}$ which involves $\ds{\Li{\mathrm{s}}}$ functions after integration by parts. 'Partial' results
are straightforward given by:
\begin{equation}
\left\lbrace\begin{array}{rcl}
\ds{\mrm{f}\pars{1}} & \ds{=} & \ds{%
-\int_{1}^{1/2}
{\ln^{2}\pars{y} \over 2y^{2} - 2y + 1}\,\dd y -
\int_{0}^{1/2}{\ln^{2}\pars{y} \over 2y^{2} - 2y + 1 }\,\dd y}
\\[5mm] & \ds{=} &
\ds{\int_{1/2}^{1}{\ln^{2}\pars{y} \over 2y^{2} - 2y + 1 }\,\dd y -
\int_{0}^{1/2}{\ln^{2}\pars{y} \over 2y^{2} - 2y + 1 }\,\dd y}
\\[5mm] & \ds{=} & \ds{%
{7 \over 64}\,\pi^{3} - 6\,\Im\Li{3}\pars{1 + \ic \over 2} - 4G\ln\pars{2} +
{3 \over 16}\,\pi\ln\pars{2}^{2}}
\\[1mm]&& G\ \mbox{is the}\ Catalan\ Constant.
\\[1cm]
\ds{\mrm{f}\pars{-1}} & \ds{=} & \ds{%
\int_{1}^{\infty}
{\ln^{2}\pars{y} \over 2y^{2} - 2y + 1}\,\dd y -
\int_{0}^{\infty}{\ln^{2}\pars{y} \over 2y^{2} + 2y + 1 }\,\dd y}
\\[5mm] & \ds{=} & \ds{%
-\,{5 \over 64}\,\pi^{3} + 2\,\Im\Li{3}\pars{1 + \ic \over 2} -
{1 \over 16}\,\pi\ln\pars{2}^{2}}
\\[1cm]
\ds{\mrm{f}\pars{\ic}} & \ds{=} & \ds{%
\Im\int_{1}^{\ol{r}}\,\,
{\ln^{2}\pars{y} \over 1 - 2y}\,\dd y -
\Re\int_{0}^{\ol{r}}\,\,{\ln^{2}\pars{y} \over 1 - 2\ic y}\,\dd y\,\qquad\qquad
\fbox{$\ds{\ r \equiv \half + \half\,\ic\ }$}}
\\[5mm] & \ds{=} & \ds{%
-\,{5 \over 64}\,\pi^{3} + 2\,\Im\Li{3}\pars{1 + \ic \over 2} -
G\ln\pars{2} - {1 \over 16}\,\pi\ln\pars{2}^{2}}
\end{array}\right.
\end{equation}
With these 'partial' results we arrive to the $\ds{\ul{final\ one}}$
$\ds{\pars{~\mbox{see expression}\ \pars{2}~}}$:
\begin{equation}
\begin{array}{|rcl|}\hline\mbox{}
\\
\ds{\quad\color{#f00}{I_{1}}} & \ds{\equiv} &
\ds{\color{#f00}{\int_0^{\pi/2}{x\ \ln\pars{\tan\pars{x}} \over \sin\pars{x}}\,\dd x} =
{3 \over 16}\,\pi^{3} - 8\,\Im\Li{3}\pars{1 + \ic \over 2} - 2G\ln\pars{2} +
{1 \over 4}\pi\ln^{2}\pars{2}\quad}
\\[3mm]
& \ds{\approx} &
\ds{0.36060659738847968961683946939114446475244886455702}
\\[5mm]
&& G\ \mbox{is the}\ Catalan\ Constant.
\\ \mbox{}\\ \hline
\end{array}
\end{equation}<|endoftext|>
TITLE: Any hints on how to prove that $\ln{1\over 2\sin\left({90\over \pi}\right)}=\sum_{n=1}^{\infty}{(-1)^{n-1}B_{2n}\over 2n(2n)!}$?
QUESTION [7 upvotes]: How do you prove that
$$
\ln\left(1 \over 2\sin\left(1/2\right)\right)
=
\sum_{n = 1}^{\infty}{\left(-1\right)^{n - 1}\,B_{2n} \over
2n\left(2n\right)!}\ ?\tag1
$$
where $B_{2n}$ is a Bernoulli number
Any hints?
REPLY [8 votes]: Hint. One may recall that
$$
\cot x = \frac1x+\sum_{n=1}^{\infty} B_{2n} \frac{(-1)^n 4^n x^{2n-1}}{(2n)!},\quad |x|<\frac{\pi}2. \tag1
$$ We are allowed to integrate the power series termwise obtaining
$$
\log( \sin x)=\log x+\sum_{n=1}^{\infty} B_{2n} \frac{(-1)^n 4^n x^{2n}}{2n(2n)!},\quad |x|<\frac{\pi}2, \tag2
$$ then taking $x=\dfrac12$ gives the result.
Remark. We may obtain $(1)$ from observing that the even function $(-2\pi,2\pi) \ni z \mapsto z\cot z$ rewrites
$$ z\cot z=z\cdot \frac{\cos z}{\sin z}=iz\cdot\frac{e^{iz}+e^{-iz}}{e^{iz}-e^{-iz}}=iz\cdot\frac{e^{2iz}+1}{e^{2iz}-1}=iz+\frac{2iz}{e^{2iz}-1}$$
and from using the classic generating function of the Bernoulli numbers
$$
\sum_{k=0}^{\infty} B_k \frac{z^k}{k!} = \frac{z}{e^z-1}, \quad |z|<2\pi,
$$ $B_0=1,\,B_1=-1/2, \,\ldots.$<|endoftext|>
TITLE: Is advanced college math (eg analysis, abstract/linear algebra, topology) supposed to be as intuitive as elementary math?
QUESTION [15 upvotes]: So I don't know if I'm not smart enough for math, but lately, it seems to me as if some advanced topics are just too unintuitive in my opinion.
For example, I have no idea what eigenvalues, jacobians or manifolds really are, and it's a similar thing with most of abstract algebra (or at least I've never been told how mathematicians came up with these kinds of concepts -I don't even know what motivated the formulation of a matrix-).
And it's not just algebra. Even though I did pretty well (intuition-wise) in one-variable calculus, I have absolutely no idea why the solutions to differential equations make sense, or the notion behind differential operators.
So, I think one of these things is going on here:
a) I'm not smart enough to gain intuition of these concepts on my own;
b) It is complicated, but possible, to understand intuitively these concepts. Problem is this educational system doesn't put too much emphasis on deep comprehension;
c) Nobody knows very well these areas of math, we just apply rules and we're deeply boggled by how far we can go with them.
Which one of these is the case? I could really use some guidance.
REPLY [3 votes]: All of these topics can be understood intuitively, they just don't tend to be taught intuitively. I remember when starting linear algebra, I already knew what a vector is so it made sense to me, but I pity someone who was trying to understand what a vector is from the axioms of a vector space, which were what was being written onto the blackboard in front of us.
Certainly the topics you mention can be intuitively understood. Intuitively, a manifold is an object of lower dimensionality than the space it's in, so a piece of paper for instance is a 2d manifold in 3d space, or a string is a 1d manifold. Intuitively, if you picture a (symmetric) matrix as representing a transformation of space, and apply it to a unit circle/sphere, then you get an ellipse/ellipsoid. The eigenvectors are the principal axes of the ellipse/ellipsoid and the eigenvalues are the lengths of those axes. The Jacobian I know less about but basically it's the derivative, except of something involving lots of variables, so it has lots of values.
There are some cynical reasons why it's not explained intuitively, including that, unlike teachers in school, lecturers aren't usually there primarily to teach, and that it's a higher-level course so expectations and demands are higher. But a nicer reason is that the power of maths comes from its ability to apply to many things beyond where it was first discovered. Any intuitive understanding of something is limited to a particular domain or application, and if all you're taught is that domain then you don't appreciate or understand the generality of it. Of course in practice, teaching something in a practical context first and then generalising would probably be more effective.<|endoftext|>
TITLE: Evaluating $\int_{0}^{\infty}{\sin(x)\sin(2x)\sin(3x)\ldots \sin(nx)\sin(n^{2}x) \over x^{n + 1}}\,dx $
QUESTION [15 upvotes]: How can we calculate
$$
\int_{0}^{\infty}{\sin\left(x\right)\sin\left(2x\right)\sin\left(3x\right)\ldots
\sin\left(nx\right)\sin\left(n^{2}x\right) \over x^{n + 1}}\,\mathrm{d}x ?
$$
I believe that we can use the Dirichlet integral
$$
\int_{0}^{\infty}{\sin\left(x\right) \over x}\,\mathrm{d}x =
{\pi \over 2}
$$
But how do we split the integrand?
REPLY [8 votes]: We can use contour integration to show that $$\int_{0}^{\infty} \frac{\sin(x) \sin(2x) \cdots \sin(nx) \sin(n^{2}x)}{x^{n+1}} \, dx = \frac{\pi n!}{2} . $$
Consider the complex function $$f(z) = \frac{\sin(z) \sin(2z) \cdots \sin(nz) e^{in^{2}z}}{z^{n+1}}. $$
If we can argue that the magnitude of the numerator is bounded in the upper half-plane, then it follows from the estimation lemma that $\int f(z ) \, dz$ vanishes along the upper half of the circle $|z|=R$ as $R \to \infty$.
Notice that the numerator of $f(z)$ can be expressed as $$\frac{e^{in^{2}z}}{(2i)^{n}}\prod_{k=1}^{n} \left(e^{ikz}-e^{-ikz} \right) . $$
Since $n^2 \ge \frac{n(n+1)}{2} = \sum_{k=1}^{n} k$, this is a linear combination of exponential functions of the form $e^{i a z}$, where $a \ge 0$. And in the upper half-plane, the magnitude of such exponential functions never exceeds $1$.
Therefore, by integrating $f(z)$ around an indented contour that consists of the real axis and the large semicircle above it, we get $$ \text{PV} \int_{-\infty}^{\infty} f(x) \, dx - \pi i \, \text{Res} [f(z), 0] = 0 \, , $$ where
$$ \begin{align} \text{Res} [f(z), 0] &= \lim_{z \to 0} z \, \frac{\sin(z) \sin(2z) \cdots \sin(nz) e^{in^{2}z}}{z^{n+1}} \\ &= \lim_{z \to 0} \frac{\sin(z)}{z} \frac{\sin(2z)}{z} \cdots \frac{\sin(nz)}{z} \, e^{in^{2}z} \\ &=1 \cdot 2 \cdots n \cdot 1 \\ &=n!. \end{align}$$
Equating the imaginary parts on both sides of the equation, we get $$\int_{-\infty}^{\infty} \frac{\sin(x) \sin(2x) \cdots \sin(nx) \sin(n^{2}x)}{x^{n+1}} \, dx = \pi n! . $$
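As a quick numerical sanity check of this value (my own check, not part of the argument), one can approximate the integral for $n=2$, where it should equal $\pi \cdot 2! = 2\pi$:

```python
import math
import numpy as np

n = 2
x = np.linspace(1e-9, 2000.0, 4_000_001)
y = np.sin(x) * np.sin(2 * x) * np.sin(n**2 * x) / x ** (n + 1)
dx = x[1] - x[0]
half_line = dx * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoid rule on [0, 2000]
print(2 * half_line, math.pi * math.factorial(n))   # both close to 6.2832
```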
And since the integrand is even, it follows that $$\int_{0}^{\infty} \frac{\sin(x) \sin(2x) \cdots \sin(nx) \sin(n^{2}x)}{x^{n+1}} \, dx = \frac{\pi n!}{2} .$$<|endoftext|>
TITLE: Prove: In a Triangle, $II_1 = a\cdot \sec \frac{A}{2}$
QUESTION [5 upvotes]: Prove that $II_1 = a\cdot \sec \dfrac{A}{2}$.
$I$ is center of incircle, $I_1$ is center of excircle.
What I did is :
Drop $ID \perp AB$ at $D$, and $I_1F \perp AB$ at $F$, so $ID\parallel I_1F$.
$\dfrac{AI}{II_1} = \dfrac{AD}{DF}$
$II_1 = \dfrac{AI \cdot DF}{AD}$
$II_1 = DF \cdot \sec \dfrac{A}{2}$
What should I do further, or provide me another approach.
REPLY [3 votes]: Let $T$ be the midpoint of $II_1$. Show that $\angle BTC=\pi-\angle BAC$ (i.e., $ABTC$ is a cyclic quadrilateral). Prove that $T$ is the center of the circumcircle of the cyclic quadrilateral $IBI_1C$. The result follows immediately.<|endoftext|>
TITLE: What is a projective plane? How is it different from an affine plane?
QUESTION [9 upvotes]: I came across the definition in the book titled Elliptic Curves by Anthony W Knapp, couldn't understand it so looked online, which just confused me more.
I'm looking for an explanation in the context of curves in projective plane/space.
REPLY [8 votes]: You get a projective plane from an affine plane if you consider “points at infinity” as regular elements of your plane. This simplifies a number of situations, for example two distinct lines will always intersect in a unique point, the special case of parallel lines vanishes. Parallel lines simply intersect at infinity.
Expressed in coordinates, you add one coordinate. So a point $(x,y)$ in the usual Cartesian plane would be represented as $[x,y,1]$ or any multiple thereof. That's called a homogeneous coordinate vector. So in fact you are no longer dealing in vectors, but strictly speaking in equivalence classes of vectors. Most of the time authors will use the same notation for vectors and for equivalence classes, and rely on context to tell you which is which in those cases where it makes a difference. To convert back, a homogeneous coordinate vector $[x,y,z]$ corresponds to a Cartesian vector $(x/z,y/z)$. If $z=0$, this would be undefined; those are the points at infinity. The vector $[0,0,0]$ has to be excluded, since it would otherwise belong to all equivalence classes. The null vector does not represent any point in the projective plane.
In terms of curves, you might want to make certain that all your equations are homogeneous, i.e. have the same degree in each term (for each geometric object). So for example the equation $7x^2 + 5y = 3$ would not be homogeneous: if you have a vector which solves this, then take twice that vector, you may end up with another representative of the same point which fails the equation. $7x^2 + 5yz = 3z^2$ on the other hand would be a homogeneous equation.
See also Difference between Projective Geometry and Affine Geometry which discusses that difference from a different point of view.<|endoftext|>
TITLE: Can we define flat connection on any given smooth manifold?
QUESTION [5 upvotes]: For example, a sphere $S^2$ in $\mathbf{R}^3$ is apparently not flat with respect to the Euclidean connection, but can we define a flat connection (and thus an atlas of affine charts) on $S^2$?
REPLY [11 votes]: There are topological obstructions to a vector bundle admitting a flat connection: most simply, by Chern-Weil theory the real Pontryagin classes of such a bundle must all vanish. So, for example, any closed $4$-manifold with nonzero signature, such as $\mathbb{CP}^2$, does not admit a flat connection.
Also by Chern-Weil theory, or by the Chern-Gauss-Bonnet theorem (which is stated on Wikipedia for the Levi-Civita connection but in fact holds for any connection), if an oriented vector bundle admits a flat connection then the real Euler class must also vanish, meaning that the Euler characteristic must be zero. So it follows that $S^2$ also does not admit a flat connection.<|endoftext|>
TITLE: Languages acceptable with just a single final state
QUESTION [5 upvotes]: For a given regular language $L$ we can always find a corresponding automaton with exactly one initial state, this is quite a common result and in most textbooks even non-deterministic automata are just allowed to have a single start state.
Now I am curious under what conditions is a single final state sufficient. Of course, sometimes a single final state is not enough (even for non-deterministic automata), for example for the language $L = \{a, bb\}$ or $L = a \cup bb^{\ast}$ (of course under the assumption that $\varepsilon$-transition are not allowed).
I guess if we allow multiple initial states in non-deterministic automata, then we can always find a non-deterministic automata with a single final state (it might have multiple start states). For a proof, if $L$ is regular, then let $\mathcal A$ be an accepting automaton for $L^R$ (i.e. the mirrored language) with a single initial state $q_0$. Then reverse all transitions and declare $q_0$ to be its single final state, and all original final states as initial states, and we have an automaton for $(L^R)^R = L$ which has just a single final state.
So is this observation correct, or are there languages for which we always need more than one final state, even if we allow multiple start states? And also, could the languages which can be accepted with just a single final state (in the deterministic case, and in the non-deterministic case with a single initial state) somehow be characterised?
Also note that $L = X^{\ast}0X$ for $X = \{0,1\}$ cannot be accepted by a DFA with a single final state, but it can be accepted by an NFA with a single final state and a single initial state.
EDIT: A straightforward characterisation for the deterministic case: as the number of Nerode right-congruence classes whose union is $L$ is an upper bound for the number of final states needed (these classes cannot be merged any further), we have that $L$ can be accepted by such an automaton iff it is itself a single equivalence class. This also shows that by adding final states we cannot gain anything in the sense of making the automaton smaller.
REPLY [5 votes]: According to Eilenberg [1, Chap. IV, Prop. 1.1], the following result holds:
Proposition. For any nonempty subset $L$ of $A^*$, the following conditions are equivalent:
1. for all $u, v \in L$, $u^{-1}L = v^{-1}L$,
2. the minimal automaton of $L$ has a single final state,
3. $L$ is recognized by a deterministic automaton with a single final state that is accessible.
[1] S. Eilenberg, Automata, Languages and Machines, Volume A, Academic Press (1974)
See also my answer to the related question (N)DFA with same initial/accepting state(s) on cstheory.<|endoftext|>
TITLE: Is $\Delta C_c^\infty$ a dense subset of $L^p(\mathbb{R}^d)$?
QUESTION [5 upvotes]: I'm struggling to obtain some density result. It is well known that $C^\infty_c(\mathbb{R}^d)$ is dense in $L^p (\mathbb{R}^d)$ for $1\leq p<\infty$.
It is well known that for $\lambda>0$, $(\lambda -\Delta) C_c^\infty$ is dense in $L^p(\mathbb{R}^d)$ for $1\leq p <\infty$.
Proof of this fact requires maximum principle and Hahn-Banach theorem, and Riesz representation theorem. (Stated in Krylov's Elliptic and Parabolic equation in Sobolev spaces)
I'm wondering whether $\Delta C_c^\infty(\mathbb{R}^d)$ is dense in $L^p (\mathbb{R}^d)$.
I tried using the Newtonian potential, but I failed to obtain the desired result: I found a $C^\infty$ function, but I cannot make a sequence of $C^\infty$ functions with compact support. Even when I tried a cut-off method, I could not make it work.
REPLY [3 votes]: No for $p=1$ (every $\Delta\psi$ with $\psi\in C^\infty_c$ has integral zero, and $\{g\in L^1:\int g=0\}$ is a proper closed subspace of $L^1$); yes for $1<p<\infty$. Let $K$ be a bounded, compactly supported function with $\int K=1$, and for $\delta>0$ define $$K_\delta(x)={\delta^d}K\left( \delta x\right).$$If $\phi\in C^\infty_c$ then $||K_\delta*\phi||_1\le||K||_1||\phi||_1$ and $||K_\delta*\phi||_\infty\le c\delta^d$, hence $$||K_\delta*\phi||_p\to0\quad(\delta\to0).$$
Now say $K=c\chi_{B(0,1)}$, where $c$ is chosen so that $\int K=1$. Say $G$ is the Green's function. Then $$K_\delta*G(x)=G(x)\quad(|x|>1/\delta),$$so if $\phi\in C^\infty_c$ and we set $$\psi_\delta=(\phi-\phi*K_\delta)*G$$then $\psi_\delta\in C^\infty_c$. So $\phi-\phi*K_\delta\in \Delta C^\infty_c$, and if $||f-\phi||_p<\epsilon$ then $||f-(\phi-\phi*K_\delta)||_p<2\epsilon$ for small enough $\delta$.<|endoftext|>
TITLE: How to write a set with an index
QUESTION [7 upvotes]: I'd like to write a set $\{x_1, x_2, ..., x_n\}$ in a simple way.
What is a popular way?
In my high school, I wrote it as $\{x_i\}_{i=1}^{n}$. Is it a correct way?
REPLY [4 votes]: In set theory we have $\{x_1,\ldots,x_n\}=\{x_i\mid 1\leq i\leq n\}=\bigcup_{1\leq i\leq n}\{x_i\}$, which is the set of (at most) $n$ elements. On the other hand, the notation $\{x_i\}_{i=1}^{n}$ is often read as the indexed family, i.e. essentially the single tuple $(x_1,\ldots,x_n)$ rather than the set of its members, so the two notations do not mean quite the same thing.<|endoftext|>
TITLE: What happens when you add $x$ to $\frac{1}{3}x$?
QUESTION [11 upvotes]: I am dealing with an equation that requires me to add $x$ to $\frac{1}{3}x$:
$x + \frac{1}{3}x$ = ??
I know this might be simple to any of you on this site, because you are all asking questions with symbols I have never seen, but this is confusing to me.
I guess one way of thinking about this is - You are adding $x$ to $yx$, right?
Or just adding another $\frac{1}{3}$?
The complete equation that I am working on is [- don't laugh at its simplicity ;)]:
$\frac{2}{3}b + 5 = 20 - b$
So, when worked out... I got:
$\frac{2}{3}b + b = 15$
And this is where I get stuck.
REPLY [6 votes]: Another approach, which at this time does not appear to have been mentioned, is to "clear fractions" from your equation. You can do this by multiplying both sides of the equation by a number that results in no fractions being left. In the case of your equation, multiply both sides by $3$:
$$\frac{2}{3}b \; + \; b \; = \; 15$$
(multiply both sides by $3$)
$$3\left(\frac{2}{3}b \; + \; b\right) \; = \; 3\left( 15 \right)$$
$$2b \; + \; 3b \; = \; 45$$
$$5b \; = \; 45$$
Now solve for $b$ by dividing both sides by $5$ to get $\;b = 9.$
Of course, you could also multiply both sides by $6$ or multiply both sides by $30,$ but $3$ is the most sensible choice because $3$ is the smallest number that does the job.
Note that if we had gotten $\;6b = 45\;$ at the end, the final answer would involve fractions. However, what this method does is keep the fractions at bay until the very end so you don't have to deal with them until the end.
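If you want to double-check such a computation, a short sympy verification (my own addition) confirms the solution of the original equation:

```python
import sympy as sp

b = sp.symbols('b')
print(sp.solve(sp.Eq(sp.Rational(2, 3) * b + 5, 20 - b), b))   # [9]
```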
Other examples:
$$\frac{2}{3}b \; + \; \frac{1}{4}b \; = 18 \;\;\;\; \text{(multiply both sides by} \; 12)$$
$$\frac{2}{3}b \; + \; \frac{1}{6}b \; = 18 \;\;\;\; \text{(multiply both sides by} \; 6)$$
$$\frac{2}{3}b \; + \; \frac{1}{4}b \; = \frac{5}{8} \;\;\;\; \text{(multiply both sides by} \; 24)$$
What you want to multiply both sides by is a number that all the denominators will divide into. If you can't think of such a number very quickly, then you can always get such a number by multiplying all the denominators together.
However, this method fails when the coefficients are not fractions or integers, such as
$$\sqrt{2}\,b \; + \; b \; = 18$$
or
$$\pi \, b \; + \; 4b \; = 18$$
In these cases, some of the other methods described here can be used (e.g. factor out $b$ and then divide both sides by what $b$ is being multiplied by).<|endoftext|>
TITLE: Prove that for any polynomial $P(x)$ there exist polynomials $F(x)$ and $G(x)$ such that $F\left(G(x) \right)-G\left(F(x) \right)=P(x)$
QUESTION [6 upvotes]: Prove that for any polynomial $P(x)$ there exist polynomials $F(x)$ and $G(x)$ such that $\forall x \in \mathbb R$,
$$F\big(G(x) \big)-G\big(F(x) \big)=P(x)\,.$$
My work so far:
Let $G(x)=x+1$. Then $$F(x+1)-F(x)=P(x)+1\,.$$
I need help here.
REPLY [2 votes]: Hint $\ $ Just like the derivative, the linear operator $\, D f(x) = f(x\!+\!1) - f(x) \,$ acts on polynomials by decreasing the degree by $1,\,$ since $\,D x^n = (x+1)^n-x^n = c_n x^{n-1} +g(x)\,$ with $\,c_n\neq 0,\,\deg g < n-1$. For any such linear operator one can solve equations of the form $\, D f(x) = g(x)\,$ for given polynomial $g$ by using undetermined coefficients and induction, e.g.
$$ g_n x^n+\cdots = D (f_{n+1} x^{n+1} + \cdots) = c_{n+1} f_{n+1} x^n + \cdots\,\Rightarrow\, f_{n+1} = g_n/c_{n+1}$$
Substituting that value for $\,f_{n+1}\,$ we reduce to a problem with a $g$ of smaller degree, and induction finishes the argument.
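A small sympy sketch of this undetermined-coefficients step for the reduction $F(x+1)-F(x)=P(x)+1$ (the helper function and the sample polynomial are mine):

```python
import sympy as sp

x = sp.symbols('x')

def solve_difference(g):
    """Find a polynomial f with f(x+1) - f(x) = g(x) by undetermined coefficients."""
    d = int(sp.degree(g, x)) + 1                 # deg f = deg g + 1
    coeffs = sp.symbols(f'c1:{d + 1}')           # constant term of f is irrelevant
    f = sum(c * x**i for c, i in zip(coeffs, range(1, d + 1)))
    eqs = sp.Poly(sp.expand(f.subs(x, x + 1) - f - g), x).all_coeffs()
    sol = sp.solve(eqs, coeffs, dict=True)[0]
    return sp.expand(f.subs(sol))

P = x**2 - 3*x + 5                               # any sample polynomial P
F = solve_difference(P + 1)                      # with G(x) = x + 1
print(sp.expand(F.subs(x, x + 1) - F - (P + 1))) # 0
```
<|endoftext|>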
TITLE: hard integral problems to solve
QUESTION [5 upvotes]: I'm practicing harder integration, using techniques that involve special functions.
I have difficulties with these two hard integrals; don't even know how to start,
$$\int_0 ^\infty x^p e^{-\frac{\theta}{x}+Bx}dx$$
where $\theta,B>0$
$$pv\int_0 ^\infty \frac{x^p e^{\cos{\theta x}} \cos(\frac{\pi p}{2} - \sin\theta x)}{1-x^2}dx$$
Please help me get started or give your solutions! Thank you so much and have a good day/night.
REPLY [4 votes]: $\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\Li}[2]{\,\mathrm{Li}_{#1}\left(\,{#2}\,\right)}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
With $\ds{\theta > 0}$ and $\ds{B > 0}$,
$\ds{\int_{0}^{\infty}x^{p}
\exp\pars{-\,{\theta \over x} \color{#f00}{\large -} Bx}\,\dd x =\, ?}$. (Note the sign highlighted in red: with $+Bx$, as in the original statement, the integral diverges, so the convergent $-Bx$ version is evaluated.)
Write $\ds{x \equiv \root{\theta \over B}\expo{t}}$ such that
\begin{align}
&\color{#f00}{\int_{0}^{\infty}x^{p} \exp\pars{-\,{\theta \over x} - Bx}\,\dd x}
=
\int_{-\infty}^{\infty}\pars{\root{\theta \over B}\expo{t}}^{p}
\exp\pars{-\root{\theta B}\bracks{\expo{-t} + \expo{t}}}\root{\theta \over B}
\expo{t}\,\dd t
\\[3mm] = &\
\pars{\theta \over B}^{\pars{p + 1}/2}\int_{-\infty}^{\infty}
\expo{\pars{p + 1}t}\exp\pars{-2\root{\theta B}\cosh\pars{t}}\,\dd t
\\[3mm] = &\
\pars{\theta \over B}^{\pars{p + 1}/2}\int_{-\infty}^{\infty}
\braces{\vphantom{\Large A}\cosh\pars{\vphantom{\large A}\bracks{p + 1}t} + \sinh\pars{\vphantom{\large A}\bracks{p + 1}t}}
\exp\pars{-2\root{\theta B}\cosh\pars{t}}\,\dd t
\\[3mm] = &\
2\pars{\theta \over B}^{\pars{p + 1}/2}\int_{0}^{\infty}
\cosh\pars{\bracks{p + 1}t}\exp\pars{-2\root{\theta B}\cosh\pars{t}}\,\dd t
\\[3mm] = &\
\color{#f00}{2\pars{\theta \over B}^{\pars{p + 1}/2}\,
\mathrm{K}_{p + 1}\pars{2\root{\theta B}}}
\end{align}
$\ds{\mathrm{K}_{\nu}\pars{z}}$ is the
modified Bessel function of the second kind.
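A quick numerical cross-check of this closed form with scipy (the sample parameter values are mine):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

theta, B, p = 2.0, 3.0, 1.5
numeric, _ = quad(lambda x: x**p * np.exp(-theta / x - B * x), 0, np.inf)
closed = 2 * (theta / B) ** ((p + 1) / 2) * kv(p + 1, 2 * np.sqrt(theta * B))
print(numeric, closed)   # the two values should agree
```
<|endoftext|>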
TITLE: Boundary of a metric space
QUESTION [5 upvotes]: Does it make sense to talk about boundary of a metric space, not of subset?
For example, if $E$ is the metric space consisting of the closed interval $ [ 0,1] $ with the usual metric of $\mathbb{R}$, then are $ \{0,1 \}$ the boundary points of $E$?
I kinda think not, since a boundary point of a set A is defined to be a point such that any neighborhood of it contains some point in A and some point not in A. But in my example above, any neighborhood of the points $0$ and $1$ contains only points in $E$.
I'm new to topology so can anyone enlighten me about this? thank you very much.
Update: Thank you guys for all your answers, it really helps a lot. So let me state what I understand about manifolds, and can you guys correct me if I'm wrong?
I don't know much about general manifold but I only know Euclidean manifold thru the text of Munkres's Analysis on Manifold.
When we say a 1-manifold in $\mathbb{R^3}$, we mean a continuous curve in $\mathbb{R^3}$, is that correct?
let this curve be defined by $f:[0,1] \to \mathbb{R^3}$, where $f$ is a homeomorphism onto its image; then by what you guys just stated, the point $ x_1 = f(0) $ would be a boundary point, since for any neighborhood $U$ of $x_1$ there is a neighborhood of the point $0$ that is open in $ \mathbb{H^1}= \{x \in \mathbb{R}: x \geq 0 \}$ and maps homeomorphically to $U$.
Is that correct? thank you for all your input.
REPLY [3 votes]: Clearly if you regard $E$ as embedded in $\mathbb{R}$, then it has a boundary. But from your comments, I assume you are regarding $E$ as not embedded.
In that case, it is trivial that $E$ is open and closed and hence has no boundary (given the usual definitions - there is a good Wikipedia article on the detail).
But if you are interested in the intuitive concept of boundary as "being on the periphery", rather than the usual definition, then you could still define the boundary in terms of the total ordering inherited from $\mathbb{R}$. In other words you take the boundary to be the points $x\in E$ such that there do not exist points $x',x''\in E$ with $x'<x<x''$.<|endoftext|>
TITLE: Appealing axioms incompatible with large cardinal axioms
QUESTION [6 upvotes]: I'm interested to know what are some 'appealing' axioms that are inconsistent with ZFC plus some large cardinal axiom. I saw the question On the contradictory nature of large cardinals & choice-like axioms, which sparked my curiosity. One reason is that in most mathematical practice all elements of a power-set that we construct are in fact in Godel's $L$, where GCH holds. Thus arguably whatever intuitive understanding of the set-theoretic universe $V$ that one may claim to have is actually more like an understanding of $L$, simply because we cannot actually conceive of an uncountable number of things except via proxy, namely the description of the phenomenon of being not countable. GCH ($κ \le λ \le 2^κ \rightarrow κ = λ \lor λ = 2^κ$ for any infinite cardinals $κ,λ$) is inconsistent with ZFC+PFA, and hence it suggests that there are other 'appealing' axiom pairs besides (PFA,GCH) and (measurable, $V=L$) that are mutually inconsistent over ZFC.
Many people have mentioned that it seems unusual that there is a linear ordering among the large cardinal axioms so far considered, but since at some point they contradict natural extensions of AC, I'm curious as to whether this linear ordering is perhaps due to the fact that these axioms are all of roughly the same kind. I had also seen this post on situations in which 'choice' principles fail, and I was told that it is still not known whether Berkeley cardinals are consistent with ZF (without AC). If they are consistent then there would be at least two incompatible kinds of 'appealing' set-theoretic universes, one with strong 'choice' principles and one having 'all sorts' of inner models.
But surely there ought to be a far richer logical landscape? There are infinitely many incomparable theories, so if it is mere consistency ($Π_1$ arithmetical sentences) we are talking about then there should be infinitely many pairwise incomparable or incompatible extensions of ZFC, just like for PRA, where PA (plus names for all the p.r. functions) and PRA+TI($ε_0$) are mutually incomparable extensions but both of which have significant consequences (induction for all formulae, Con(PA) respectively). Also, both of them are in turn interpretable in Z$_2$. In this case the two theories are not incompatible simply because we already had the standard model of $\mathbb{N}$ in mind when we constructed both of them. However, in set theory it seems to me that there is no standard model describable in even natural language, so I see no obstruction to having a wide variety of incompatible extensions of ZFC that are intuitively appealing.
So what other pairs are known? I don't know the literature so I'd appreciate references too. Thanks!
REPLY [4 votes]: My personal favorite - although it seems not to be very popular - is the inner model hypothesis (IMH) and its variants. IMH states that anything that can happen in an inner model, does: specifically, $$\mbox{Any parameter-free $\varphi$ which holds in an inner model of some outer model of $V$,}$$ $$\mbox{already holds in some inner model of $V$.}$$
(Of course the quantification over outer models means that this definition takes place inside some class theory like MK; to make things more ZFC-y, we can restrict attention to inner models of tame class-generic extensions. I believe that most of the results about IMH go through for this "internal" version as well.)
While IMH has large cardinal strength - consistency-wise, it is much stronger than a measurable cardinal - it contradicts even the existence of inaccessible cardinals! Moreover, at least to me IMH is philosophically plausible, if not compelling. See http://arxiv.org/pdf/0711.0680v1.pdf.<|endoftext|>
TITLE: Exponential of Lie Groups.
QUESTION [6 upvotes]: When does the exponential map define a bijection between a Lie group $G$ and its Lie algebra? The only example I know is the Heisenberg group.
REPLY [5 votes]: There is a really cool example that I love, given in a paper by Arsigny, et al. which shows that the symmetric positive definite matrices can be given a Lie group structure, and this is one such example where the exponential map is a bijection from the Lie group to its Lie algebra. Let $SPD(n)$ be the manifold of symmetric positive definite matrices. This can be realized as a manifold both as an open subset of the symmetric matrices $Symm(n)$, and also as a homogeneous space $SPD(n) \approx GL_n(\mathbb{R})/O(n)$ under the action $GL_n(\mathbb{R}) \times SPD(n) \to SPD(n)$ given by $(A, P) \to APA^T$. Using the spectral theorem we can diagonalize any $P \in SPD(n)$ as $P = QDQ^T$ with $D$ diagonal with positive elements and $Q \in O(n)$. Furthermore, the tangent space $T_PSPD(n) \cong Symm(n)$ at each point, hence the tangent vectors are symmetric matrices. The matrix exponential map $\exp:Symm(n) \to SPD(n)$ given by the Taylor series has the convenient computational property that
$$
\exp(P) \;\; =\;\; Q\exp(D) Q^T
$$
and similarly the logarithm map $\log:SPD(n) \to Symm(n)$ defined by the usual Taylor expansion is defined for all matrices in $SPD(n)$ and serves as an inverse for $\exp$.
Now, the part that Arsigny and his collaborators introduced in their paper is how to turn $SPD(n)$ into a Lie group. This can be done by introducing the binary operation $\odot: SPD(n) \times SPD(n) \to SPD(n)$ given by
$$
A\odot B \;\; =\;\; \exp(\log(A) + \log(B)).
$$
One can easily show that this operation is smooth and satisfies all of the conditions to turn $(SPD(n), \odot)$ into a Lie group. What's also exciting is that in the paper the authors introduce a scalar multiplication that turns $SPD(n)$ into a vector space.
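A short numerical sketch of this product, using the spectral-theorem formulas for $\exp$ and $\log$ quoted above (the code and variable names are mine):

```python
import numpy as np

def spd_log(P):
    """Matrix logarithm of an SPD matrix via the spectral theorem P = Q D Q^T."""
    w, Q = np.linalg.eigh(P)
    return Q @ np.diag(np.log(w)) @ Q.T

def spd_exp(S):
    """Matrix exponential of a symmetric matrix, computed the same way."""
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(np.exp(w)) @ Q.T

def odot(A, B):
    """The group product A (.) B = exp(log A + log B)."""
    return spd_exp(spd_log(A) + spd_log(B))

# quick check: the product of two random SPD matrices is SPD, and the operation commutes
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
A, B = X @ X.T + np.eye(3), Y @ Y.T + np.eye(3)
P = odot(A, B)
print(np.allclose(P, P.T), np.all(np.linalg.eigvalsh(P) > 0), np.allclose(P, odot(B, A)))
```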
Update 9/15/2019
I wanted to update this example with a couple more features that will make this analysis a little more complete.
Notice that the Lie group product turns $SPD(n)$ into a commutative Lie group. This implies that the Lie algebra $Symm(n)$ is commutative, and rightfully so the bracket $[\cdot, \cdot]: Symm(n) \times Symm(n) \to Symm(n)$ is trivial. This fact is correctly reflected in the Campbell-Baker-Hausdorff formula:
$$
\log\left (\exp(A) \odot \exp(B)\right ) \;\; =\;\; A+B.
$$
Observe now that the Lie group structure $(SPD(n), \odot)$ is really just the pushforward of vector addition on $Symm(n)$ via $\exp$, and this is afforded to us by the fact that $\exp$ is a global diffeomorphism. We can extend this to a vector space structure on $SPD(n)$ by also pushing forward the scalar multiplication. Arsigny et al. defined the operation $\circledast:\mathbb{R} \times SPD(n) \to SPD(n)$ given by
$$
\lambda\circledast P \;\; =\;\; \exp \left (\lambda \log(P)\right ).
$$
One can show that $\circledast$ satisfies the distributive properties of scalar multiplication and in fact serves as a scalar multiplication on $SPD(n)$.<|endoftext|>
TITLE: Converse of Schur's Lemma in finite dimensional vector spaces
QUESTION [5 upvotes]: I am trying to prove (or disprove) the converse of Schur's Lemma in finite dimensional vector spaces. I am not sure if it holds in this case, but I have tried to apply the idea that proves it in representation theory (see for example Theorem 4.3 here that uses Maschke's theorem or in questions here and here).
The converse of Schur's Lemma in finite dimensional vector spaces is:
Let $V$ be a finite dimensional vector space over the complex numbers. Let $S$ be a set of endomorphisms of $V$ and assume that every endomorphism $A$ of $V$ such that $$AB=BA\text{ for all }B\in S$$ is of the form $\lambda I, \ \lambda\in\mathbb{C}$. Then $V$ is a simple $S$-space.
(*an endomorphism of $V$ is a linear operator from $V$ to $V$
*$V$ is a simple $S$-space if the only $S$-invariant subspaces of $V$ are $V$ itself and the zero subspace)
What I've tried so far:
Assume to the contrary that $V$ is not a simple $S$-space. Then, there exists a subspace $W$ of $V$ such that $W\not =\{0\}$, $W\not= V$ and $W$ is $S$-invariant. Let $W'$ be a subspace of $V$ complementary to $W$; since $V$ is finite dimensional, such a $W'$ exists and $V=W\oplus W'$ (this is my attempt to translate Maschke's theorem to vector spaces). Consequently, for every $v\in V$, there exist unique $w\in W$ and $w'\in W'$ such that $v=w+w'$. Define the projection $P:V\rightarrow V$ by $Pv=w$ for every $v\in V$.
What is left to prove is that $PB=BP$ for all $B\in S$. Then $P$ is clearly not a scalar and thus we have the contradiction we are looking for.
There is a difficulty in showing that $PB=BP$ for all $B\in S$, because $W'$ might be $S$-invariant or not. More precisely:
Let $v\in V=W\oplus W'$ and $v\not =0_V$.
If $v\in W$ then $Bv\in W$ because $W$ is $S$-invariant, therefore: $PBv=Bv$ and $BPv=Bv$.
If $v\in W'$ then $Pv=0$, hence $BPv=B\cdot 0=0$, and
(i) if $Bv\in W'$ then $PBv=0$.
(ii) if $Bv\in W$ then $PBv=Bv$ and this is where the problem occurs.
Any hints, ideas or counterexamples would be very helpful.
REPLY [4 votes]: Here's a counterexample: let $S = \{A \in \Bbb C^{2 \times 2} : Ax = x\}$ where $x = (1,0)$ (the column-vector $(1,0)$). That is, $S$ is the set of all matrices whose first column is $(1,0)$.
Now, the span of $(1,0)$ is an $S$-invariant subspace.
However, consider any $A$ that is not a multiple of the identity. If $A$ is diagonalizable, then it has eigenvectors $y_1,y_2$ with $Ay_i = \lambda_i y_i$, and $\lambda_1 \neq \lambda_2$ (otherwise $A$ would be a multiple of the identity). One of these eigenvectors is linearly independent of $x$; suppose WLOG that $y_1$ and $x$ are linearly independent. We can then define $B$ by
$$
Bx = x\\
By_1 = y_1 + y_2
$$
we find that $ABy_1 \neq BAy_1$, so that $AB \neq BA$.
Now, suppose that $A$ is not diagonalizable. It follows that $A$ is not diagonal. So, it fails to commute with
$$
B = \pmatrix{1&0\\0&2}
$$
Thus, $S$ is a counterexample to your claim.<|endoftext|>
TITLE: Modified gambler's ruin problem: quit when going bankruptcy or losing $k$ dollars in all
QUESTION [6 upvotes]: In each round, the gambler either wins and earns 1 dollar, or loses 1 dollar. The winning probability in each round is $p<1/2$. The gambler initially has $a$ dollars. He quits the game when he has no money, or he has lost $k>a$ rounds in all by this time, no matter how many rounds he wins. (For example, if $a=2$, $k=3$, and the sequence is +1,+1,+1,-1,+1,-1,-1, he quits now.) What is his expected exit time?
What confuses me is the dependence between these two events. I know the generating function of the exit time in the standard gambler's ruin problem, and the duration until the gambler loses $k$ dollars in all is a negative binomial random variable. But these two stopping times are dependent. I was wondering if anyone could give me some hint. Thanks a lot!
Update: From Ross Millikan's hint: how to calculate the probability that the wealth is $b$ at the end of round $2k-a$, given that the game is not over?
REPLY [2 votes]: For the loss of $k$ to kick in, he needs to win $k-a$ times. If he does that, he will never go broke (except maybe on the round he would quit because of the $k$ losses). He needs to win those $k-a$ within the first $2k-a$ games. So compute the chance he goes broke in less than $2k-a$ games and the expected length of a game in that scenario. This gives you the chance he invokes the $k$ losses. Now compute the expected length of a game given that he wins at least $k-a$ times in the first $2k-a$ games.
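If you just want a number to check a computation against, a Monte Carlo simulation of the stopping rule is straightforward (this sketch and its parameters are my own, not part of the hint above):

```python
import random

def expected_exit_time(a, k, p, trials=200_000):
    """Estimate the expected exit time: stop at bankruptcy or after k total losses."""
    total_rounds = 0
    for _ in range(trials):
        money, losses, t = a, 0, 0
        while money > 0 and losses < k:
            t += 1
            if random.random() < p:   # win a round
                money += 1
            else:                     # lose a round
                money -= 1
                losses += 1
        total_rounds += t
    return total_rounds / trials

print(expected_exit_time(a=2, k=3, p=0.4))
```
<|endoftext|>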
TITLE: Is $f(x)=\sum_{n=1}^\infty\frac{nx^2}{n^3+x^3}$ uniformly continuous on $[0,\infty)$?
QUESTION [17 upvotes]: Last week I had an assignment to show $f(x)=\sum\limits_{n=1}^\infty\frac{nx^2}{n^3+x^3}$ for $x\ge0$ does not converge uniformly, but I misread the question as "show $f(x)$ is not uniformly continuous."
The actual problem went on to show that $f(x)$ is continuous, but I have been stumped by the question I misread:
Is $f(x)=\sum\limits_{n=1}^\infty\frac{nx^2}{n^3+x^3}$ uniformly continuous on $[0,\infty)$?
I asked my professor about the problem today, but unfortunately we still didn't come up with an answer (and while my professor believes that $f(x)$ is not uniformly continuous, I suspect that it is).
Things I have proven about $f(x)$ that I can explain or one can assume in an answer: The series does not converge uniformly to $f(x)$, $f(x)$ is continuous, and $f(x)>\frac{x-1}{2}$.
REPLY [6 votes]: I think that the derivative of $f$ is indeed bounded on $[0,\infty),$ which implies $f$ is uniformly continuous there. I'll give an outline: Let's write
$$f(x) = x^2 \sum_{n=1}^{\infty} \frac{n}{n^3 + x^3}.$$
This will give
$$\tag 1 f'(x) = 2x \sum_{n=1}^{\infty} \frac{n}{n^3 + x^3} + x^2 \sum_{n=1}^{\infty} \frac{-3x^2n}{(n^3 + x^3)^2}.$$
(You verify this on any bounded interval, where all convergence in sight is uniform. Since differentiation is a local property, we get $(1)$.)
Now the right side of $(1)$ will be bounded if we show
$$\sum_{n=1}^{\infty}\frac{n}{n^3 + x^3} = O(1/x) \,\,\text { and } \sum_{n=1}^{\infty}\frac{n}{(n^3 + x^3)^2} = O(1/x^4)$$
as $x\to \infty.$ OK, I'll leave it here for now; these two estimates remain to be checked.
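A quick numerical look at those two bounds (this check is my own): the quantities $x\sum_n n/(n^3+x^3)$ and $x^4\sum_n n/(n^3+x^3)^2$ should stay bounded as $x$ grows.

```python
import numpy as np

N = 2_000_000                      # truncation of the series
n = np.arange(1, N + 1, dtype=float)
for x in [10.0, 100.0, 1000.0]:
    S1 = np.sum(n / (n**3 + x**3))
    S2 = np.sum(n / (n**3 + x**3)**2)
    print(x, x * S1, x**4 * S2)    # both products level off as x increases
```
<|endoftext|>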
TITLE: Is the number of isomorphism classes of quotients of a finite dimensional commutative ring over a field finite?
QUESTION [5 upvotes]: If $A$ is a finite dimensional unital and commutative algebra over some infinite field $k$, what is the number of isomorphism classes of rings of the form $A/I$ where $I$ is a proper ideal of $A$? Is it finite?
Certainly, for dimensions 7 and below (by the results of this paper) the answer to the finite question will be yes, as the number of isomorphism classes of algebras of dimension $n \leq 6$ is finite. Is the number of isomorphism classes of such rings still finite if the dimension of $A$ is greater than 7?
REPLY [2 votes]: The infinitely many $7$-dimensional algebras described in Poonen's paper are all quotients of the finite dimensional algebra $A=k[w,x,y,z]/\mathfrak{m}^3$ (where $\mathfrak{m}$ is the ideal $(w,x,y,z)$). So $A$ is a $15$-dimensional counterexample.<|endoftext|>
TITLE: Sum of nth powers of Fibonacci numbers
QUESTION [6 upvotes]: Is a closed form for
$$\sum_{i=1}^n{F_i^k}$$
(where $F_i$ is the $i^{th}$ Fibonacci number and $k$ is constant) known?
REPLY [2 votes]: Consider a generalized Fibonacci sequence $\{U_n\}$ with $U_n=U_n(p,q)$ defined as $U_0=0$, $U_1=1$, and $U_{n+2}=-qU_n+pU_{n+1}$ for all $n\geq0$. It can be proved that
$$\sum_{i=1}^nU_i^m=\frac{1}{\sum_{i=1}^m{m\atopwithdelims\{\}i}}\left(\sum_{i=1}^nU_{im-\binom{m+1}{2}}+\sum_{i=1}^m{m\atopwithdelims\{\}i}\sum_{j=1}^i(U_{n-i+j}^m-U_{j-i}^m)\right),$$
in which
$${m\atopwithdelims\{\}i}=\left(\prod_{\underset{j=1}{j\neq i}}^mU_{j-i}\right)^{-1}.$$
Also, note that in the above formula
$$\sum_{x=1}^nU_{ax+b}=\frac{q^aU_{na+b}-U_{(n+1)a+b}-q^aU_{b-a}+U_b}{1+q^a-V_a}-U_b$$
for all integers $a,b$. See here.<|endoftext|>
TITLE: Number of disjoint circles in half plane minus a disk that touch both boundary components
QUESTION [7 upvotes]: Let $\Omega \subset \mathbb{C}$ be right half-plane, with the disc $D$ removed, where $D$ is the disk of radius $r=3$ centered at $z_0=5$. What is the maximum number of disjoint open disks in $\Omega$ whose boundary touch both boundary components of $\Omega$?
This is a question I had on my qualifying exam years ago; I couldn't solve it then, and today I stumbled upon it again and it nags me that I can't find a solution. Or at least, one without using any table or any map that I couldn't come up with on my own during an exam.
What I did (then and now) is to transform conformally the right half plane into the unit circle, and then see where the disk $D$ is mapped. Then my idea was to count the number of circles in this scenario. If the circles are not those coming from vertical lines in the right half plane, then they will come from circles in the original domain $\Omega$ (a simple test for a circle not to be a "bad" circle, is not to be centered on the real line, for example).
That is,
$$
f(z)=\frac{z-1}{z+1}
$$
maps $\Omega$ to the unit disk with the disk $\tilde{D}$ removed, where $\tilde{D}$ is the disk centered at $w_0=\frac{5}{9}$ of radius $\frac{2}{9}$, since $f(2)=\frac13$ and $f(8)=\frac79$ are the endpoints of a diameter of $\tilde{D}$ (this follows from some elementary but tedious computations which I won't include in the question).
Now, I don't know how many disjoint circles touching both boundary components of $f(\Omega)$ I can fit in it, nor do I know how to guarantee that the answer is invariant under $f^{-1}$.
REPLY [4 votes]: Just adding a graphical figure to Martin R's answer.
The transformation used in the figure differs slightly from Martin's in that
$$f(z)=4+\frac{16}{z-4}=\frac{4z}{z-4}=2\left(1+T(z)\right)$$
but both are equally usable. The important point is that the inversion circle has its center at $c=4$. Well, $c=-4$ would have worked as well, both possibilities follow from requiring constant annulus width:
$$f(0)-f(z_0-r) = f(z_0+r)-f(\infty)$$
The solid black circles are $\partial D$
and an exemplary set of solution circles.
The dash-dotted circle represents the inversion circle associated with
$\bar{f}$.
The gray circles are the boundaries of $f(\Omega)$.
The dashed circles are the images of the solution circles under $f$,
these form a Steiner chain with respect to the gray circles.
Two of the dashed circles disappear under solid circles,
those are just exchanged by $f$.
Note that the contents of the annulus could be rotated and still yield
valid solution circles under the inverse transform, with one exception: No dashed circle shall touch the inversion center $c$, otherwise it maps back
to a (vertical) line which does not touch the $y$-axis
(in $\mathbb{C}$, that is).<|endoftext|>
TITLE: Mean value theorem for a gradient of convex function
QUESTION [7 upvotes]: This is from an article, page 19. Let $J(u)=\sum \sqrt
{u_i^2+\epsilon}$, and $p^{k+1}=\nabla J(u^{k+1})$, $p^{k}=\nabla
J(u^{k})$. Since $J$ is convex, the mean value theorem tells us that
$$p^{k+1}-p^{k} = D^{k+\frac{1}{2}}(u^{k+1}-u^k) $$ where
$D^{k+\frac{1}{2}}$ is a diagonal matrix such that
$$D^{k+\frac{1}{2}}_{i,i} = \epsilon
((u_i^{k+\frac{1}{2}})^2+\epsilon)^{-3/2}$$ for some
$u^{k+\frac{1}{2}}$ between $u^k$ and $u^{k+1}$.
But I can't understand why there is such $D^{k+\frac{1}{2}}$. Let $f(u)=\sqrt {u^2+\epsilon}$, then $f'(u)=u/\sqrt{u^2+\epsilon}$, $f''(u)=\epsilon/\sqrt{u^2+\epsilon}^3$. I thought 2 implementation of MVT:
Method1: If we use MVT for components of $p^{k+1}-p^{k}$, then we obtain separately $u_i^{k+\frac{1}{2}}$, which is on the segment $[u_i^{k},u_i^{k+1}]$, but the whole $u^{k+\frac{1}{2}}$ may not be on the segment $[u^{k},u^{k+1}]$. I'm not sure the author meant this.
Method2: If we use MVT for $g(t)=\nabla J (u^{k+1}t + u^k (1-t))\cdot(u^{k+1}-u^k) = \sum \frac{u_i}{\sqrt{u_i^2+\epsilon}} (u_i^{k+1}-u_i^k)$, where $u_i=u_i^{k+1}t + u_i^k (1-t)$ then we have $g(1)-g(0)=g'(c) \Rightarrow (p^{k+1}-p^{k}) \cdot (u^{k+1}-u^k)= \sum \frac{\epsilon}{\sqrt{(u_i^{k+\frac{1}{2}})^2+\epsilon}^3}(u_i^{k+1}-u_i^k)^2$ where $u^{k+\frac 1 2}=u^{k+1}c + u^k (1-c)$ is on the segment $[u^{k},u^{k+1}]$. But how can we conclude that $p^{k+1}-p^{k} = D^{k+\frac{1}{2}}(u^{k+1}-u^k)$? Can we use convexity?
Summary: I want to know where to use MVT, and convexity.
REPLY [3 votes]: Given the $J$ he has, note
$$ \nabla J_i = \frac{u_i}{\sqrt{u_i^2 +\epsilon}} $$
Thus
$$ \nabla^2 J_{ij} = \frac{\partial^2 J}{\partial u_i \partial u_j}= \delta_{ij} \left ( \frac{1}{\sqrt{u_i^2 +\epsilon}} - \frac{u_i^2}{(u_i^2 +\epsilon)^{3/2}}\right)=\frac{\delta_{ij} \epsilon}{(u_i^2 + \epsilon)^{3/2}}$$
The mean value theorem applied to the scalar function $\nabla J_i$ gives
$$ \nabla J_i (x) - \nabla J_i (y) = \nabla (\nabla J_i)(u^{(i)})\cdot (x-y) $$
for some $u^{(i)}$ on the segment between $x$ and $y$; since the Hessian is diagonal, only the coordinate $u^{(i)}_i$, which lies between $x_i$ and $y_i$, actually enters. Thus
$$ \nabla J (x) - \nabla J (y) = \nabla^2J(u)\cdot ( x-y) $$
NOTE that $u$ may not be on the line between $x$ and $y$. Convexity isn't really playing a direct role here, since you have a formula for $J$. But you could probably make an argument in the general case using convex $\iff \nabla^2 J \geq 0$ (positive semi-definite).<|endoftext|>
TITLE: How many ways a 9 digit number can be formed using the digits 1 to 9 without repetition such that it is divisible by $11$.
QUESTION [7 upvotes]: In how many ways can a 9-digit number be formed using the digits 1 to 9 without repetition such that it is divisible by $11$?
My attempt-
A number is divisible by 11 if and only if the alternating sum of its digits is divisible by 11.
The other thing to notice is that, as it is a 9-digit number formed from the digits 1 to 9, each digit from 1 to 9 will appear exactly once.
Basically, the question boils down to how many ways we can arrange 123456789 so that the alternating sum of the digit is divisible by 11.
I am not able to proceed further. Any help would be appreciated.
REPLY [2 votes]: $$1+2+...+9=\frac{9\cdot10}{2}=45\\(x_1+x_3+x_5+x_7+x_9)-(x_2+x_4+x_6+x_8)=11m$$
$m$ must be odd since $45$ is odd. Moreover the four-digit sum $Y$ lies between $1+2+3+4=10$ and $6+7+8+9=30$, so $X-Y=45-2Y$ lies between $-15$ and $25$, and only $m=\pm1$ survives. So we have
$$\begin{cases}X+Y=45\\X-Y=\pm11\end{cases}$$ Hence either $X=28,\ Y=17$ or $X=17,\ Y=28$; in both cases one of the two groups of positions carries digits summing to $28$, so we count the subsets of $\{1,\dots,9\}$ of size $5$ (the odd positions) or of size $4$ (the even positions) with sum $28$.
1) With $9$ and $8$ in such a subset, the remaining digits must sum to $28-(9+8)=11$: either a pair $x_1+x_2=11$ (for a $4$-element subset) or a triple $x_1+x_2+x_3=11$ (for a $5$-element subset).
$x_1+x_2=7+4=6+5$ give $2$ cases.
$x_1+x_2+x_3=7+3+1=6+4+1=6+3+2=5+4+2$ give $4$ cases.
2) With $9$ and $7$ without $8$ one has $x_1+x_2=12$ and $x_1+x_2+x_3=12$.
$x_1+x_2=12$ no cases since $6+5=11\lt12$.
$x_1+x_2+x_3=6+5+1=6+4+2=5+4+3$ give $3$ cases.
With $9$ and $6$, without $7$ and $8$, nothing is possible because the remaining digits would have to sum to $28-15=13$, while at most $5+4=9$ (for a pair) or $5+4+3=12$ (for a triple) is available.
3) With $8$ and $7$ without $9$ one has $x_1+x_2$ not possible and
$x_1+x_2+x_3=6+5+2=6+4+3$ give $2$ cases.
Hence there are $2+4+3+2=11$ possible cases, each corresponding to a choice of which digits occupy the odd positions and which occupy the even positions; in each case the five odd-position digits can be permuted in $5!$ ways and the four even-position digits in $4!$ ways.
Thus there are $11\cdot4!\cdot5!=\color{red}{31680}$ possibilities.
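This count is small enough to confirm by brute force (the script is my own check):

```python
from itertools import permutations

count = sum(
    1
    for p in permutations(range(1, 10))
    if int("".join(map(str, p))) % 11 == 0
)
print(count)   # 31680
```
<|endoftext|>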
TITLE: A slightly problematic integral $\int{1/(x^4+1)^{1/4}} \, \mathrm{d}x$
QUESTION [10 upvotes]: Question. To find the integral of- $$\int {\frac{1}{(x^4+1)^\frac{1}{4}} \, \mathrm{d}x}$$
I have tried substituting $x^4+1$ as $t$, and as $t^4$, but it gives me an even more complex integral. Any help?
REPLY [10 votes]: Let $$I = \int\frac{1}{(x^4+1)^{\frac{1}{4}}}dx$$
Put $x^2=\tan \theta,$ Then $2xdx = \sec^2 \theta d\theta$
So $$I = \int\frac{\sec^2 \theta}{\sqrt{\sec \theta}}\cdot \frac{1}{2\sqrt{\tan \theta}}d\theta = \frac{1}{2}\int\frac{1}{\cos \theta \sqrt{\sin \theta}}d\theta = \frac{1}{2}\int\frac{\cos \theta}{(1-\sin^2 \theta)\sqrt{\sin \theta}}d\theta$$
Now Put $\sin \theta = t^2\;,$ Then $\cos \theta d\theta = 2tdt$
So $$I = \int\frac{1}{1-t^4}dt = -\int\frac{1}{(t^2-1)(t^2+1)}dt = \frac{1}{2}\int\left[\frac{1}{1-t^2}+\frac{1}{1+t^2}\right]dt$$
So $$I = \frac{1}{4}\ln \left|\frac{1+t}{1-t}\right|+\frac{1}{2}\tan^{-1}(t)+\mathcal{C},\qquad\text{where } t=\sqrt{\sin\theta}=\frac{x}{(x^4+1)^{1/4}}$$<|endoftext|>
TITLE: Cyclic Galois group of even order and the discriminant
QUESTION [5 upvotes]: I am stuck on the following problem:
Let K be a field of characteristic $\neq 2$ and $f\in K[X]$ a
separable irreducible polynomial with roots $\alpha_1,\ldots \alpha_n$
in a splitting field $L$ of $f$ over $K$. The Galois group of $f$ is
cyclic of even order. Show that the discriminant
$\Delta=\prod_{i<j}(\alpha_i-\alpha_j)^2$ of $f$ is not a square in $K$.
TITLE: Bound for the degree
QUESTION [5 upvotes]: Let $K$ be a perfect field and let $f\in K[x]$ be a monic irreducible polynomial of degree $n$.
Denote by $\alpha,\beta$ two distinct roots of $f$.
Is the following bound true?
$$
[K(\alpha-\beta):K]\geq \frac n2
$$
If not, does someone know a similar bound (if it exists)?
REPLY [5 votes]: This bound does not hold. Consider the Artin-Schreier polynomial, $$f(X)=X^p-X+1\in\mathbb{F}_p[X]$$
Note that $f(\alpha)=0\implies f(\alpha+1)=0$. Take $\beta=\alpha+1$ to obtain a contradiction to the proposed bound.<|endoftext|>
TITLE: Definition of contractible chain complex
QUESTION [6 upvotes]: A relatively simple question. A book I'm reading states "a complex homotopic to the zero complex is called contractible"... but I don't understand the statement.
I know what it means for chain maps to be homotopic, but not chain complexes themselves. What does it mean for a complex to be homotopic to the zero complex?
REPLY [5 votes]: Well, the statement isn't very precise. Indeed, it's maps that can be homotopic, while complexes can be homotopy equivalent. Two complexes $C$, $D$ are homotopy equivalent if there exist chain maps $f : C \to D$ and $g : D \to C$ such that $f \circ g$ is homotopic to $\operatorname{id}_D$ and $g \circ f$ is homotopic to $\operatorname{id}_C$.
Finally, a chain complex $C$ is said to be contractible if it is homotopy equivalent to the zero complex. If you unpack the definition, it means that there exist linear maps $h : C_n \to C_{n+1}$ such that for all $x \in C$,
$$x = dh(x) + h(dx).$$
Note that this is not equivalent to having vanishing homology (a condition called acyclicity). For example, the following chain complex is acyclic but not contractible:
$$\dots \to 0 \to \mathbb{Z} \xrightarrow{2 \cdot} \mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \to 0.$$
Indeed if it were contractible, then the exact sequence would split, which isn't the case.<|endoftext|>
TITLE: Prove that a holomorphic function injective in an annulus is injective in the whole ball
QUESTION [6 upvotes]: Let $f: B(0,1) \rightarrow \mathbb{C}$ be holomorphic and suppose $\exists\ r \in (0,1)$ such that $f$ is injective in $A = \{z \in \mathbb{C} : r < |z| < 1\}$. Prove that $f$ is injective.
I tried using Rouché theorem, or the identity theorem, but I don't know what to do. Any hints? :)
REPLY [6 votes]: This is quite non-trivial. Suppose $z_1,z_2\in B(0,1)$ are distinct points such that $f(z_1)=f(z_2)=z_0$. Take a circle $\gamma\subset A$ such that $z_1,z_2\in Int(\gamma)$. Then observe that $f(\gamma)$ is a Jordan curve (this is where you use the injectivity of $f$ in $A$) and thus by the Jordan curve theorem, $\text{Ind}(f(\gamma),z_0)\leq1$, ignoring orientation. However by the argument principle, $$\text{Ind}(f(\gamma),z_0)=\dfrac{1}{2\pi i} \int_\gamma\dfrac{f'}{f-z_0}\geq2$$
This gives the required contradiction.<|endoftext|>
TITLE: Is this contraction of metric tensor derivatives symmetric?
QUESTION [8 upvotes]: A couple of times when I've tried to prove symmetries of various tensors (for learning), I've ended up with the expression below, and the fact that either a) I made a mistake, or b) the expression is symmetric with respect to switching $k$ and $l$.
$$
\frac{\partial g_{ij}}{\partial x^k} \frac{\partial g^{ij}}{\partial x^l}
$$
Where $g_{..}$ and $g^{..}$ are the covariant and contravariant metric tensor respectively, and $x^.$ is the coordinate.
Is the expression symmetric wrt switching $k$ and $l$? If so, is it possible to prove this using only indicial notation?
REPLY [13 votes]: Since the product rule tells us $0 = \partial( g g^{-1} ) = (\partial g) g^{-1} + g (\partial g^{-1})$, we have a formula for the derivative of the inverse metric:
$$ \partial_l g^{ij} = -g^{ia} g^{jb} \partial_l g_{ab}.$$
Substituting this in to your expression we get
$$ -g^{ia} g^{jb} \partial_l g_{ab} \partial_k g_{ij}.$$
If we swap the dummy indices $a \leftrightarrow i$, $b \leftrightarrow j$ then this is equal to
$$ -g^{ai} g^{jb} \partial_l g_{ij} \partial_k g_{ab};$$
so it's symmetric in $k$ and $l$.<|endoftext|>
TITLE: Degree of the difference of two roots
QUESTION [7 upvotes]: Let $f\in \mathbb Q[x]$ be an irreducible monic polynomial of degree $n$ and let $\alpha,\beta\in \overline{\mathbb Q}$ be two distinct roots of $f$.
Is it possible to find a lower bound on the degree of $\alpha-\beta$?
By heart, my claim is that
$$
[\mathbb Q(\alpha-\beta):\mathbb Q]\geq \frac n2
$$
The original question Bound for the degree concerned the same claim for arbitrary fields.
If the claim is false, can someone find a bound, if it exists?
REPLY [5 votes]: It is possible that $\alpha-\beta$ is algebraic of degree $
TITLE: Under what type of transformations are characteristic classes and characteristic numbers of a manifold invariant?
QUESTION [6 upvotes]: When I say "characteristic class of a manifold" I mean the characteristic class of the tangent bundle. I assume that all Chern/Pontrjagin classes/numbers are invariant under diffeomorphism, if they are not I will be very confused.
Then I hear talk about "topological invariants of smooth manifolds". I think this is one of my difficulties. Does this mean that if you have two smooth manifolds and a homeomorphism between them, then the invariant is preserved and has nothing to do with the chosen smooth structure? It seems like a strange thing to say.
I believe that Milnor has shown that the integer Pontrjagin classes are NOT topological invariants, and Novikov has proved that the rational Pontrjagin classes are. I haven't seen anything indicating either way for Chern classes.
What about characteristic numbers then? The signature of a manifold is a topological invariant so certainly certain combinations of characteristic numbers can be, but is it true in general? The literature is usually a bit advanced so I don't think I could go through it easily but of course these are important basic questions, if anyone could clarify that would be great.
I would like to know most about Chern/Pontrjagin classes/numbers but if anyone has something else to throw in I'll gladly take it :)
REPLY [3 votes]: The oriented four-dimensional manifold $ \mathbb{CP}^2\#\overline{\mathbb{CP}^2}$ admits a complex structure consistent with its orientation, namely the one which comes from the blow-up of $\mathbb{CP}^2$ at a point. Note that $$H^2(\mathbb{CP}^2\#\overline{\mathbb{CP}^2}; \mathbb{Z}) \cong H^2(\mathbb{CP}^2; \mathbb{Z})\oplus H^2(\overline{\mathbb{CP}^2}; \mathbb{Z}) \cong \mathbb{Z}a \oplus \mathbb{Z}b.$$
With its standard complex structure, $c_1(\mathbb{CP}^2\#\overline{\mathbb{CP}^2}) = 3a - b$. However, $\mathbb{CP}^2\#\overline{\mathbb{CP}^2}$ also admits complex structures with first Chern class $3a + b$, $-3a + b$ and $-3a-b$. In particular, the first Chern class is not invariant under diffeomorphism.
The above statement can be deduced from the following general facts.
Suppose $E \to X$, $F \to X$ are real vector bundles and $\phi : E \to F$ is a vector bundle isomorphism. If $J$ is an almost complex structure on $F$, then $\phi^{-1}\circ J\circ\phi$ is an almost complex structure on $E$. Equipping $E$ and $F$ with these almost complex structures, they can be viewed as complex vector bundles and $\phi$ becomes a complex vector bundle isomorphism. In particular, $c_i(E) = c_i(\phi^*F) = \phi^*c_i(F)$.
In the special case where $X$ is a smooth manifold, $E = F = TX$, and $\phi = f_*$ where $f : X \to X$ is a diffeomorphism, we also have $N_{f_*^{-1}\circ J\circ f_*}(V, W) = f_*^{-1}N_J(f_*V, f_*W)$. Therefore $f_*^{-1}\circ J\circ f_*$ is integrable if and only if $J$ is integrable. If they are integrable, then $f : X \to X$ is a biholomorphism.
There is a diffeomorphism $f : \mathbb{CP}^2 \to \mathbb{CP}^2$ which acts by $-1$ on $H^2(\mathbb{CP}^2; \mathbb{Z})$; see the beginning of this answer. As in that answer, one can extend this to a self-diffeomorphism of $\mathbb{CP}^2\#\overline{\mathbb{CP}^2}$ which can act by $1$ on $a$ and $-1$ on $b$, or $-1$ on $a$ and $1$ on $b$, or $-1$ on $a$ and $-1$ on $b$. Combining with the statements above, we obtain complex structures on $\mathbb{CP}^2\#\overline{\mathbb{CP}^2}$ with first Chern class $3a + b$, $-3a - b$, and $-3a + b$ respectively.
By construction, $\mathbb{CP}^2\#\overline{\mathbb{CP}^2}$ equipped with any of these complex structures is biholomorphic to the standard $\mathbb{CP}^2\#\overline{\mathbb{CP}^2}$.<|endoftext|>
TITLE: Noteworthy examples of finite categories
QUESTION [8 upvotes]: So far all the finite categories I have encountered fall into one of these c̶a̶t̶e̶g̶o̶r̶i̶e̶s̶ sets:
finite monoids
finite preorders
just formal devices to explain what a "diagram" in another (infinite) category is
Are there any other finite categories, which are not monoids or preorders, which are interesting by themselves?
REPLY [6 votes]: One example is what is called a fusion system.
A fusion system is a category where the objects are the subgroups of some fixed $p$-group $S$ and where the morphisms form a subset of the set of injective homomorphisms between the subgroups which contains all those induced by conjugation by some element from $S$.
Further, it is required that any morphism $\varphi$ from $P$ to $Q$ factors through the inclusion of $\varphi(P)$ into $Q$ and that the inverse homomorphism $\varphi^{-1}: \varphi(P)\to P$ is also in the category.
These are meant as a generalization of the fusion structure on the set of subgroups of a $p$-Sylow subgroup of a finite group, and they have been studied extensively over the past 10 years or so, especially by people in various areas of algebraic topology.<|endoftext|>
TITLE: Example of a set and monotone class where monotone class is not a $\sigma$-algebra?
QUESTION [5 upvotes]: What is an example of a set $X$ and a monotone class $\mathcal{M}$ consisting of subsets of $X$ such that $\emptyset \in \mathcal{M}$, $X \in \mathcal{M}$, but $\mathcal{M}$ is not a $\sigma$-algebra?
REPLY [3 votes]: Recall that a σ-algebra for $X$ is a collection of subsets $\Sigma$ of $X$ such that:
The empty set and the whole set $X$ belong to $\Sigma$.
$\Sigma$ is closed under all countable unions.
$\Sigma$ is closed under all countable intersections.
$\Sigma$ is closed under complementation.
Recall that a monotone class for $X$ is a collection of subsets $\mathcal{M}$ of $X$ such that:
The whole set $X$ belongs to $\mathcal{M}$.
$\mathcal{M}$ is closed under unions of monotonically increasing collections $A_1 \subseteq A_2 \subseteq \ldots $ of sets in $\mathcal{M}$.
$\mathcal{M}$ is closed under intersections of monotonically decreasing collections $B_1 \supseteq B_2 \supseteq \ldots $ of sets in $\mathcal{M}$.
So suppose $\mathcal{M}$ is a monotone class for the set $X$, and suppose furthermore that $\varnothing \in \mathcal{M}$. Then $\mathcal{M}$ satisfies the first property of being a σ-algebra, but may fail in the other three. Let's find examples for each.
Failing the countable union property
Put $X = \{1,2,3\}$ and $\mathcal{M} = \{\varnothing, \{1\},\,\{2\},\{3\}, X\}$.
We can easily show that $\mathcal{M}$ is a monotone class. However, $\mathcal{M}$ fails the countable union property, since $\{1\} \in \mathcal{M}$ and $\{2\}\in \mathcal{M}$ but $\{1\}\cup \{2\} = \{1,2\} \notin \mathcal{M}$.
The trick here was to generate an example that is closed under unions of sets in a monotone collection, but not closed under unions of sets like $\{1\}$ and $\{2\}$ which are not related by the $\subseteq$ relation.
Failing the countable intersection property
Put $X = \{1,2,3\}$ and $\mathcal{M} = \{\varnothing, \{1,2\},\,\{2,3\}, X\}$.
We can easily show that $\mathcal{M}$ is a monotone class. However, $\mathcal{M}$ fails the countable intersection property, since $\{1,2\} \in \mathcal{M}$ and $\{2,3\}\in \mathcal{M}$ but $\{1,2\}\cap \{2,3\} = \{2\} \notin \mathcal{M}$.
I generated this example using essentially the same trick as before.
Failing the complementation property
In fact, both of our examples above fail to have the complementation property, as you can show. Another example, mentioned in the comments, is:
$X = \{1,2\}$, $\mathcal{M} = \{\varnothing, \{1\}, X\}$.
Here, $\{1\} \in \mathcal{M}$ but its complement $\{2\} \notin \mathcal{M}$.
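Since all of these families are finite, every increasing or decreasing chain stabilizes, so each of them is automatically a monotone class; only the $\sigma$-algebra closure properties can fail. A tiny brute-force check of the three examples (the function name is mine; for finite families, countable closure reduces to pairwise closure):

from itertools import combinations

def closure_report(M, X):
    # report which sigma-algebra closure properties the finite family M of subsets of X satisfies
    M = {frozenset(s) for s in M}
    X = frozenset(X)
    return {
        'unions':        all(a | b in M for a, b in combinations(M, 2)),
        'intersections': all(a & b in M for a, b in combinations(M, 2)),
        'complements':   all(X - a in M for a in M),
    }

print(closure_report([set(), {1}, {2}, {3}, {1, 2, 3}], {1, 2, 3}))   # fails unions and complements
print(closure_report([set(), {1, 2}, {2, 3}, {1, 2, 3}], {1, 2, 3}))  # fails intersections and complements
print(closure_report([set(), {1}, {1, 2}], {1, 2}))                   # fails only complements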
Hope this helps!
Side note: the "complementation" example above has all the properties of a σ-algebra except for complementation. You might try to construct analogous examples of, e.g. a monotone class that satisfies all the properties of a σ-algebra except countable unions— however, this is impossible. For example, if a collection $\mathcal{M}$ is closed under complementation and countable unions, then it is necessarily closed under countable intersections:
For any sequence $\{A_i\}_{i=1}^\infty$ in $\mathcal{M}$,
$$\bigcap_{i=1}^\infty A_i = \left(\bigcup_{i=1}^\infty A_i^\mathsf{C}\right)^\mathsf{C}.$$
All of the $A_i^\mathsf{C}$ are in $\mathcal{M}$ since $\mathcal{M}$ is closed under complementation; hence, so is their union $A=\cup_i A_i^\mathsf{C}$, since $\mathcal{M}$ is closed under countable unions. Finally, then so is $A^{\mathsf{C}} = \cap_{i=1}^\infty A_i$ since $\mathcal{M}$ is closed under complementation.<|endoftext|>
TITLE: Indefinite integral with residue theorem
QUESTION [5 upvotes]: I tried to solve the following integral using residue theorem. $$\int_0^\infty\frac{x}{\sinh x} ~\mathrm dx=\int_{-\infty}^\infty\frac{x}{e^x-e^{-x}}~\mathrm dx$$
$e^x-e^{-x}=0$ when $x=n\pi i,\ n\in\mathbb Z$
So the residues are (when n is a positive integer)
$$\frac{(-1)^n n\pi i}{2}$$
Thus the value of definite integral will be
$$2\pi i\sum_{n=1}^\infty \frac{(-1)^n n\pi i}{2}=\pi^2(1-2+3-4+5-\ldots)$$
But the series diverges obviously. Here I used the technique that
$$A=1-1+1-1+1-\ldots$$
$$A=1-(1-1+1-1+1-\ldots)$$
$$A=1-A, A=\frac{1}{2}$$
$$B=1-2+3-4+5-6+7-\ldots$$
$$B=(1-1+1-1+1-\ldots)-(1-2+3-4+5-\ldots)$$
$$B=A-B, B=\frac{1}{4}$$
Thus the integral value is $\frac{\pi^2}{4}$.
Although the value itself is correct, I think this method is still controversial. How can this method become justified? Or is there a problem in my residue theorem solution?
REPLY [2 votes]: $\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\iff}{\Leftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\Li}[2]{\,\mathrm{Li}_{#1}\left(\,{#2}\,\right)}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
This is another possibility to evaluate the integral along a complex plane contour:
\begin{align}
\color{#f00}{\int_{0}^{\infty}{x \over \sinh\pars{x}}\,\dd x} & =
\half\int_{-\infty}^{\infty}{x \over \sinh\pars{x}}\,\dd x =
\int_{-\infty}^{\infty}{x\expo{x} \over \expo{2x} - 1}\,\dd x\
\stackrel{\expo{x}\ \mapsto\ t}{=}\
\int_{0}^{\infty}{\ln\pars{t} \over t^{2} - 1}\,\dd t
\end{align}
The integral is evaluated along a $\mathit{\mbox{key-hole}}$ contour which takes care of the $\ds{\ln\pars{z}}$ $\mathit{\mbox{branch-cut}}$ along the 'positive real axis'. Namely,
$$
\ln\pars{z} = \ln\pars{\verts{z}} + \,\mathrm{arg}\pars{z}\ic\,,\ 0 < \,\mathrm{arg}\pars{z} < 2\pi\,,\ z \not= 0
$$
Along the above mentioned contour, we evaluate the integral
$$
\int{\ln^{2}\pars{z} \over z^{2} - 1}\,\dd z
$$
which has one pole $\ds{\pars{= -1}}$ 'inside' the contour. The
forthcoming $\ds{\ \ic\,0^{\pm}\ }$ 'factors' will take care of the singularity at $\ds{x = +1}$ along the $\ds{\ln}$ branch-cut.
$$
2\pi\ic\,{\braces{\ln\pars{\verts{-1}} + \pi\ic}^{\, 2} \over -1 - 1} =
\int_{0}^{\infty}
{\bracks{\ln\pars{t} + 0\,\ic}^{\, 2} \over
\pars{t - 1 + \ic 0^{+}}\pars{t + 1}}\,\dd t
+
\int_{\infty}^{0}
{\bracks{\ln\pars{t} + 2\pi\ic}^{\, 2} \over
\pars{t - 1 - \ic 0^{-}}\pars{t + 1}}\,\dd t
$$
\begin{align}
\pi^{3}\,\ic & =
\mathrm{P.V.}\int_{0}^{\infty}{\ln^{2}\pars{t} - \ln^{2}\pars{t} - 4\pi\ic\,\ln\pars{t} - 4\pi^{2} \over t^{2} - 1}\,\dd t
\\[3mm] &\ +\ \overbrace{\int_{0}^{\infty}{\ln^{2}\pars{t} \over t + 1}\,
\bracks{-\pi\ic\,\delta\pars{t - 1}}\,\dd t}^{\ds{=\ 0}}
\\[3mm] & -
\int_{0}^{\infty}{\bracks{\ln\pars{t} + 2\pi\ic}^{\, 2}\over t + 1}\,
\bracks{\pi\ic\,\delta\pars{t - 1}}\,\dd t
\\[1cm] & =
-4\pi\ic\int_{0}^{\infty}{\ln\pars{t} \over t^{2} - 1}\,\dd t -
4\pi^{2}\,\mathrm{P.V.}\int_{0}^{\infty}{\dd t \over t^{2} - 1} + 2\pi^{3}\ic
\\[1cm] \imp
\int_{0}^{\infty}{\ln\pars{t} \over t^{2} - 1}\,\dd t & =
{\pi^{3}\ic - 2\pi^{3}\ic \over -4\pi\ic} -
{4\pi^{2} \over -4\pi\ic}\,\mathrm{P.V.}\int_{0}^{\infty}{\dd t \over t^{2} - 1}
\\[3mm] & = \color{#f00}{\pi^{2} \over 4} -
\pi\ic\,\mathrm{P.V.}\int_{0}^{\infty}{\dd t \over t^{2} - 1}\tag{1}
\end{align}
Note that
\begin{align}
\mathrm{P.V.}\int_{0}^{\infty}{\dd t \over t^{2} - 1} & =
\lim_{\epsilon \to 0^{+}}\pars{\int_{0}^{1 - \epsilon}{\dd t \over t^{2} - 1} +
\int_{1 + \epsilon}^{\infty}{\dd t \over t^{2} - 1}}
\\[3mm] & =
\lim_{\epsilon \to 0^{+}}\pars{\int_{0}^{1 - \epsilon}{\dd t \over t^{2} - 1} +
\int_{1/\pars{1 + \epsilon}}^{0}{\dd t \over t^{2} - 1}} =
\lim_{\epsilon \to 0^{+}}
\int_{1 - \epsilon}^{1/\pars{1 + \epsilon}}{\dd t \over 1 - t^{2}} = 0
\end{align}
\begin{align}
&\mbox{because}
\\[3mm] &\
0 < \verts{\int_{1 - \epsilon}^{1/\pars{1 + \epsilon}}{\dd t \over 1 -t^{2}}}
<\verts{\pars{{1 \over 1 + \epsilon} - 1 + \epsilon}\,{1 \over
1 - 1/\pars{1 + \epsilon}^{2}}} =
\verts{\epsilon\pars{1 + \epsilon} \over 2 + \epsilon}\ \stackrel{\epsilon\ \to\ 0}{\longrightarrow}\ 0
\end{align}
With this result and expression $\pars{1}$:
$$
\color{#f00}{\int_{0}^{\infty}{x \over \sinh\pars{x}}\,\dd x} =
\int_{0}^{\infty}{\ln\pars{t} \over t^{2} - 1}\,\dd t =
\color{#f00}{\pi^{2} \over 4}
$$<|endoftext|>
TITLE: Why can't we use the law of cosines to prove Fermat's Last Theorem?
QUESTION [5 upvotes]: In investigating approaches to Fermat's Last Theorem I came across the following and I can't figure out where I am going wrong. Any input would be greatly appreciated.
We want to show that $a^n + b^n = c^n$ cannot hold for odd $n>1$ and pairwise relatively prime $a$, $b$, and $c$. Assuming by way of contradiction that we have $a^n + b^n = c^n$ we must have $a$, $b$, and $c$ forming the sides of a triangle since $(a+b)^n > c^n$ so $a+b>c$. Therefore the law of cosines can apply and we can write:
$$c^2 = a^2+b^2 - 2ab{\cos{C}}$$
where $C$ is the angle opposite to side $c$. If we add and subtract $2ab$ on the right-hand side we get
$$c^2 = {(a+b)}^2 -2ab(\cos{C}+1)$$
Now, $a+b$ and $c$ share a common factor since $(a+b) | (a^n+b^n)$ for odd $n$ and $c^n = a^n+b^n$. (Here $x | y$ means, as usual, "$x$ divides $y$".) Therefore, they share the same factor with $2ab(\cos{C}+1)$.
Now, $\cos{C} + 1$ must be a rational number since $a$, $b$, and $c$ are all integers. So let $\cos{C} +1 = \frac{r}{s}$ where $r$ and $s$ are integers and $(r,s)=1$. (i.e. $\frac{r}{s}$ is a reduced fraction). (Here, $(r,s)$ means as usual the greatest common divisor of $r$ and $s$.)
Now assuming $a$, $b$, and $c$ are relatively prime we must have $(ab) |s$ for otherwise $c$ and $2ab$ would share a common factor. Moreover, we must have $ab=s$ since otherwise $\frac{2abr}{s}$ would not be an integer. (Since $c - a - b$ is even, we don't need $2 | s$.) So we can write:
$$\cos{C}+1 = \frac{r}{ab}$$ or equivalently $$\cos{C} = \frac{r - ab}{ab}$$
Now we had from the law of cosines:
$$c^2 = a^2+b^2 - 2ab{\cos{C}}$$
so making the substitution $\cos{C} = \frac{r - ab}{ab}$ we get
$$c^2 = a^2 + b^2 - 2r + 2ab$$
If we subtract $a^2$ from both sides and factor out $b$ on the right-hand side, we get:
$$c^2 - a^2 = b(b + 2a) - 2r$$
Now, $(c - a) | (c^2 - a^2)$ and also $(c-a) | (c^n - a^n)$. Then we must have $((c-a),b) >1$ since $b^n = c^n - a^n$. From the equation above, we must therefore also have $(b,2r) > 1$. Similarly we can show that we must have $(a,2r) > 1$.
However, both of these conclusions are problematic since $r$ was initially assumed to be relatively prime to $s = ab$. The only other option is that $a$ and $b$ are both even, but this is also problematic since $a$ and $b$ are assumed to be relatively prime.
Thus we cannot have $a^n + b^n = c^n$ for odd $n>1$ and pairwise relatively prime $a$, $b$, and $c$.
I'm sure someone has thought of this approach before so where am I going wrong?
REPLY [3 votes]: As Maik points out in the accepted answer, we don't necessarily need $ab=s$ in order to ensure that $a$, $b$, and $c$ are pairwise relatively prime. We do however require that $(c,\dfrac{2abr}{s})>1$ since $(c, (a+b) )>1$.
Now we need $s|ab$ because otherwise we would not get an integer value for $\dfrac{2abr}{s}$. Also we cannot have $(c,2ab)>1$ because this would imply that $a$, $b$, and $c$ share a common factor. (As noted in some of the comments to my original question we could have $c$ being even if $a$ and $b$ are both odd, but then $4|c$, $4|{(a+b)}^2$ so we would need $4|2ab$ implying that either $a$ or $b$ is even, a contradiction.)
What I forgot was that we can still have $(c,r)>1$ and thus avoid any contradictions with $a$, $b$ and $c$ being pairwise relatively prime.<|endoftext|>
TITLE: Is the Fibonacci constant $0.11235813213455...$ a normal number?
QUESTION [13 upvotes]: Recall that a normal decimal number is an irrational number $\alpha \in \mathbb{R}$ such that each digit 0-9 appears with average frequency tending toward $\frac{1}{10}$, each pair of digits 00-99 appears with average frequency tending toward $\frac{1}{100}$ in the decimal expansion of $\alpha$, etc.
Since the Champernowne constant $$0.12345678910111...$$ obtained by concatenating natural numbers is known to be normal, and since the Copeland-Erdős Constant $$0.23571113171923...$$ obtained by concatenating prime numbers is known to be normal, and since the Besicovitch constant $$0.14916253649648...$$ obtained by concatenating the squares of natural numbers is known to be normal, it is natural to consider whether or not the "Fibonacci constant" $$0.11235813213455...$$ obtained by concatenating consecutive entries in the Fibonacci sequence is normal in base $10$.
This problem has been considered previously in the linked arXiv article, although the "proof" given in this article is erroneous. So it is natural to ask:
(1) Is the Fibonacci constant $0.11235813213455...$ normal in base $10$?
(2) Is the Fibonacci constant $0.11235813213455...$ known to be normal in base $10$?
REPLY [7 votes]: On Edit: code completely revised (and now debugged!)
It is an interesting question. The answer is probably "yes", though I have no idea how to prove it.
It can't hurt to write a program to explore it. Here is a simple Python3 function to explore the digit-block distributions:
import statistics

def fibConst(n):
    #Generator for first n digits of the Fibonacci constant
    F1 = 0
    F2 = 1
    pool = list(str(F2))
    pool.reverse() #list of digits in reversed order
    for i in range(n):
        if len(pool) == 0:
            F1, F2 = F2, F1 + F2
            pool = list(str(F2))
            pool.reverse()
        yield int(pool.pop())

def digitDist(n, k=1, summary=True):
    #returns the distribution of the k-digit blocks
    #first n digits of the Fibonacci constant
    #.1123581321...
    #if summary = True (the default)
    #a statistical summary is returned:
    #(min, max, median, mean, standard deviation)
    #otherwise, the whole distribution is returned as a list
    counts = [0] * 10**k
    digits = fibConst(n)
    num = 0
    for i in range(k):
        num = 10*num + next(digits)
    counts[num] = 1 #record initial block of length k
    for i in range(k, n):
        num = (10*num + next(digits)) % 10**k
        counts[num] += 1
    if not summary:
        return counts
    else:
        minCount = min(counts)
        maxCount = max(counts)
        med = statistics.median(counts)
        m = statistics.mean(counts)
        sd = statistics.pstdev(counts)
        return (minCount, maxCount, med, m, sd)
The first function is a generator (aka lazy list) which produces successive digits on demand. It doesn't attempt to keep the full n digits in memory.
Typical runs:
>>> for k in range(1,6): print(digitDist(10**6,k))
(99445, 100743, 100013.0, 100000.0, 351.6694470664178)
(9819, 10245, 10002.0, 9999.99, 91.5220732938235)
(894, 1096, 1000.0, 999.998, 31.575560105879358)
(61, 142, 100.0, 99.9997, 10.070278045317318)
(0, 28, 10.0, 9.99996, 3.1738399453028503)
>>> for k in range(1,7): print(digitDist(10**7,k))
(997286, 1003133, 999964.0, 1000000.0, 1490.5531188119396)
(99373, 100868, 100014.5, 99999.99, 322.87339608583426)
(9650, 10363, 10001.5, 9999.998, 98.10266049399476)
(890, 1140, 1000.0, 999.9997, 31.520718581751908)
(56, 143, 100.0, 99.99996, 9.967519249963855)
(0, 28, 10.0, 9.999995, 3.1578519597940304)
While this data seems on the whole to support the conjecture that the number is normal, the bottom lines of these runs are both surprising. When you look at the first million digits, some 5-digit sequences appear not at all and some appear 28 times. A bit of sleuthing uncovered that the sequence 24242 appeared zero times and the sequence 48087 appeared 28 times. I don't know what to make of this, though it is enough to make me a little more hesitant in conjecturing normality.
Final remark: if you want a string representation of the initial part of the constant you can write a function like:
def strFib(n):
    return '0.' + ''.join(str(d) for d in fibConst(n))
For example,
>>> strFib(20)
'0.11235813213455891442'
>>> strFib(100)
'0.1123581321345589144233377610987159725844181676510946177112865746368750251213931964183178115142298320'<|endoftext|>
TITLE: Sample points from a multivariate normal distribution using only the precision matrix?
QUESTION [7 upvotes]: I have a problem where I can directly compute the (sparse) precision matrix (inverse of the covariance) of a multivariate normal distribution, but the covariance itself is not sparse and I don't want to invert things. I would like to sample points using the precision matrix. What's a fast way to do this?
I do know that the standard procedure is to compute the Cholesky decomposition of the covariance, that is, find a lower triangular matrix $L$ such that $LL' = \Sigma$, and then use a univariate generator to compute a vector of iid normal points $u$ so that finally the vector $z = \mu + Lu$ has the correct covariance. But that would require me computing the covariance AND then taking its Cholesky decomposition. Is there anything faster, using the fact that I have the precision matrix?
REPLY [3 votes]: As is pointed out in the statement, any matrix decomposition $\mathbf{L}$ such that $\mathbf{LL}^\top = \boldsymbol{\Sigma}$ gives you a way to sample from the multivariate Gaussian distribution. Simply set $\boldsymbol{z} = \boldsymbol{\mu} + \mathbf{L}\boldsymbol{y}$, where $\boldsymbol{y}$ is a vector of independent univariate Gaussian variates and $\boldsymbol{\mu}$ your mean vector.
The matrix decomposition is not unique, so an efficient (and convenient way) to use the sparsity is to find the Cholesky decomposition of the precision matrix.
Starting with the positive-definite precision matrix $\boldsymbol{\Sigma}^{-1}$, compute its sparse Cholesky decomposition as $\boldsymbol{\Sigma}^{-1}=\mathbf{T}\mathbf{T}^\top$, where $\mathbf{T}$ is a lower triangular (sparse) matrix. The inverse of that Cholesky root, the lower triangular matrix $\mathbf{T}^{-1}$, can then be obtained by back-solving. There are dedicated algorithms to do these calculations efficiently using the sparse structure (these may require reordering).
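Here is a small dense NumPy/SciPy sketch of this recipe (the function name and the toy precision matrix are mine; for a genuinely large sparse problem one would use a dedicated sparse Cholesky such as CHOLMOD and back-solve rather than form any inverse explicitly):

import numpy as np
from scipy.linalg import cholesky, solve_triangular

def sample_from_precision(mu, Q, n_samples=1, rng=None):
    # Q = T T^T with T lower triangular (Cholesky of the *precision*).
    # If y ~ N(0, I), then x = mu + T^{-T} y has covariance T^{-T} T^{-1} = Q^{-1},
    # so one triangular solve per batch suffices and Q is never inverted.
    rng = np.random.default_rng(rng)
    T = cholesky(Q, lower=True)
    y = rng.standard_normal((len(mu), n_samples))
    x = solve_triangular(T.T, y, lower=False)      # solves T^T x = y
    return (mu[:, None] + x).T

# sanity check on a small tridiagonal (i.e. sparse-structured) precision matrix
n = 5
Q = np.diag(2.0 * np.ones(n)) - 0.9 * (np.eye(n, k=1) + np.eye(n, k=-1))
samples = sample_from_precision(np.zeros(n), Q, n_samples=200000, rng=0)
print(np.max(np.abs(np.cov(samples.T) - np.linalg.inv(Q))))   # small (sampling error only)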
Since $\mathbf{T}^{-\top} \mathbf{T}^{-1} = \boldsymbol{\Sigma}$, you can use the matrix $\mathbf{L} \equiv \mathbf{T}^{-\top}$ in your algorithm.<|endoftext|>
TITLE: How do I prove this $\int_{0}^{\infty}{e^{-x^n}-e^{-x^m}\over x\ln{x}}dx={\ln{\left(m\over n\right)}}?$
QUESTION [8 upvotes]: How do I prove this
$$\int_{0}^{\infty}{e^{-x^n}-e^{-x^m}\over x\ln{x}}dx=\color{blue}{\ln{\left(m\over n\right)}}.\tag1$$
I know of the standard integral
$$\int_{0}^{1}{x^m-x^n\over \ln{x}}dx=\ln\left({m+1\over n+1}\right)\tag2$$
I can't seem to find a suitable substitution for $(1)$
Can someone give a hint please? Thank you.
REPLY [8 votes]: Take $\log\left(x\right)=v$. We have
\begin{align}
I&=\int_{0}^{\infty}\frac{\exp\left(-x^{n}\right)-\exp\left(-x^{m}\right)}{x\log\left(x\right)}\ dx\\[10pt]
&=\int_{-\infty}^{\infty}\frac{\exp\left(-e^{vn}\right)-\exp\left(-e^{vm}\right)}{v}\ dv\\[10pt]
&=\int_{0}^{\infty}\frac{\exp\left(-e^{vn}\right)-\exp\left(-e^{vm}\right)}{v}\ dv-\int_{0}^{\infty}\frac{\exp\left(-e^{-vn}\right)-\exp\left(-e^{-vm}\right)}{v}dv
\end{align}
so if we apply Frullani's theorem to the functions $f\left(x\right)=\exp\left(-e^{x}\right)$ and $g\left(x\right)=\exp\left(-e^{-x}\right)$ respectively, we get
$$I=\frac{1}{e}\log\left(\frac{m}{n}\right)-\left(\frac{1}{e}-1\right)\log\left(\frac{m}{n}\right)=\color{red}{\log\left(\frac{m}{n}\right)}$$
as wanted.
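A quick numerical check of $(1)$ for a couple of exponent pairs (the helper names are mine; the integrand's singularities at $x=0$ and $x=1$ are removable and the quadrature below never evaluates exactly there):

import numpy as np
from scipy.integrate import quad

def integrand(x, n, m):
    return (np.exp(-x**n) - np.exp(-x**m)) / (x * np.log(x))

def check(n, m):
    # split at x = 1, where the integrand has a removable singularity
    a, _ = quad(integrand, 0, 1, args=(n, m))
    b, _ = quad(integrand, 1, np.inf, args=(n, m))
    return a + b, np.log(m / n)

print(check(2, 3))   # both entries ~ log(3/2) = 0.4054...
print(check(1, 5))   # both entries ~ log 5    = 1.6094...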
Addendum. It is interesting to note that we can easily generalize the result. We have the following:
Theorem. If $f:\left(0,\infty\right)\rightarrow\mathbb{R}
$ is a function such that $\lim_{x\rightarrow0}f\left(x\right)=f\left(0\right)\in\mathbb{R}
$ and $\lim_{x\rightarrow\infty}f\left(x\right)=f\left(\infty\right)\in\mathbb{R}
$ and is integrable over any interval $0<a\le x\le b<\infty$, then for every $m,n>0
$ we get $$\int_{0}^{\infty}\frac{f\left(x^{n}\right)-f\left(x^{m}\right)}{x\log\left(x\right)}dx=\left(f\left(0\right)-f\left(\infty\right)\right)\log\left(\frac{m}{n}\right).
$$
Proof: We have $$I=\int_{0}^{\infty}\frac{f\left(x^{n}\right)-f\left(x^{m}\right)}{x\log\left(x\right)}dx\overset{\log\left(x\right)=v}{=}\int_{-\infty}^{\infty}\frac{f\left(e^{vn}\right)-f\left(e^{vm}\right)}{v}dv
$$ $$=\int_{0}^{\infty}\frac{f\left(e^{vn}\right)-f\left(e^{vm}\right)}{v}dv-\int_{0}^{\infty}\frac{f\left(e^{-vn}\right)-f\left(e^{-vm}\right)}{v}dv
$$ and now since we have the hypothesis of the classic Frullani's theorem we get $$\begin{align}
I= & \left(f\left(1\right)-f\left(\infty\right)\right)\log\left(\frac{m}{n}\right)-\left(f\left(1\right)-f\left(0\right)\right)\log\left(\frac{m}{n}\right)\\
= & \left(f\left(0\right)-f\left(\infty\right)\right)\log\left(\frac{m}{n}\right).\\
& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\square
\end{align}$$<|endoftext|>
TITLE: Is there a difference between $y(x)$ and $f(x)$
QUESTION [12 upvotes]: Oftentimes functions are described by $f(x) = 2x+4$, and when this is mapped to the Cartesian plane, $f(x) = y$. This surely implies that $y = 2x+4$. Is there a difference between this and $y(x) = 2x+4$?
REPLY [12 votes]: Functions vs. coordinates
Consider plots of two different functions: $f(x)$ and $g(x)$ on the same $xy$ plane. One curve will be labeled $y=f(x)$ which means "this is a set of $(x,y)$ points that satisfy $y=f(x)$ condition". The other will be labeled $y=g(x)$.
Which of these should define $y(x)$? Both? – certainly not, because $f$ and $g$ are different functions. I say: neither. The statement $y=f(x)$ is just a condition for some set of points (i.e. $(x,y)$ pairs) while $y=g(x)$ is another condition for another set of points.
Explicit definition in a form $y(x)=…$ does define a function (well, does or doesn't, read the next paragraph). In this case $y$ is just an arbitrary name and may replace $f$. The same symbol $y$ may be a coordinate on $xy$ plane, which was $xf$ plane before the name replacement. (It is only a custom to have $xy$ plane.) This "union" of function name and coordinate name may cause a problem when there is another function $g(x)$ to plot.
It should be obvious that if $y$ replaces $f$ it cannot replace $g$ that is different than $f$.
For that reason it is a good thing to have coordinates with symbols which are not function names.
Definitions vs. equations or conditions
Another problem: we often write function definitions the same way as conditions to be met or equations to solve. Compare the two:
$$cos(x) = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!} $$
$$cos(x) = \frac{1}{2}$$
The former may be treated as non-geometric definition of $cos$ function. The latter is just the equation to solve for $x$. We have some experience and often feel the difference, but a person (say: Bob) completely unaware of $cos$ will be confused. Bob may find every $x$ that satisfies
$$\sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!} = \frac{1}{2}$$
and still will not be able to tell what number $cos(x)$ is equal to for any other $x$.
It's worse than that! Bob cannot tell what number $cos(x)$ is equal to even for $x$ being his solution, because he cannot be sure that either equation defines the function (we know it's the first one, Bob doesn't). To clarify that, let's see what happens when I change $cos$ to $sin$ only:
$$sin(x) = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!} $$
$$sin(x) = \frac{1}{2}$$
We know by experience that neither of above defines $sin$. Yet these are legitimate equations to solve either separately or as a system (with empty set solution). Bob (not knowing about $sin$) may only assume that one of the equations is a definition – this will be wrong, his set of solutions will not be empty.
That's why I like the notation $f(x) \equiv …$ or the word "def" above the equality sign, or the explicit statement ("let us define…") – just to cut out possible ambiguity.
I've got the impression that you meant $y(x) \equiv 2x+4$ because there is no other expression in your example that you may want to define as $y(x)$.
Summary
Is there a difference between $y=2x+4$ and $y(x)=2x+4$?
My answer is: in general it may be. The second form is more likely to be read as a definition of a function, yet any form may or may not be intended to be a definition. Both may be equations to solve, when $y(x)$ is defined elsewhere ($y$ may be a given number or a parameter not depending on $x$, still it can be formally written as $y(x)$). The second form states that there is some function $y(x)$; the first one may mention a function or a variable (coordinate) $y$. The coordinate (not function) interpretation allows the first form to be a condition for points (i.e. $(x,y)$ pairs) – as it may be in your example – that leaves room for another conditions for another sets of points.
Is there a difference between $y=2x+4$ and $y(x) \equiv 2x+4$?
Yes. The second form defines a function for sure. The first one may have another meaning (explained above).
Is there a difference between $y \equiv 2x+4$ and $y(x) \equiv 2x+4$?
There is a subtle one: from the second form we know the independent variable is $x$; it may be $x$ xor $y$ in the first one.
Is there a difference between $y(x)$ and $f(x)$?
No. In a sense: you can name your function with any unused symbol. But if $y$ is already in use (e.g. to name a different function, coordinate, parameter) then you cannot freely rename $f$ to $y$.<|endoftext|>
TITLE: The cube of any number not a multiple of $7$, will equal one more or one less than a multiple of $7$
QUESTION [11 upvotes]: Yeah so I'm kind of stuck on this problem, and I have two questions.
Is there a way to define a number mathematically so that it cannot be a multiple of $7$? I know $7k+1,\ 7k+2,\ 7k+3,\ \cdots$, but that will take ages to prove for each case.
Is this a proof? $$(7k + a)^3 \equiv (0\cdot k + a)^3 \equiv a^3 \bmod 7$$
Thank you.
REPLY [5 votes]: Say $a$ is a number not a multiple of $7$.
Then we have that $$a \equiv \pm 1 ,\pm 2,\pm 3 \pmod7 $$
$$\implies a^3 \equiv (\pm 1)^3 ,(\pm 2)^3,(\pm 3)^3\equiv \pm 1 ,\pm 8,\pm 27 \pmod7 $$
$$\implies a^3 \equiv \pm 1 \pmod7 $$
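A quick exhaustive check over the nonzero residues confirms this:

print({a: pow(a, 3, 7) for a in range(1, 7)})
# {1: 1, 2: 1, 3: 6, 4: 1, 5: 6, 6: 6} -- every nonzero cube is 1 or 6 = -1 (mod 7)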
Hope this is the shortest proof possible.<|endoftext|>
TITLE: How many applicants need to apply in order to meet the hiring target?
QUESTION [5 upvotes]: Cam needs to hire $30$ new employees. Ten percent $(10\%)$ of applicants do not meet the basic business requirements for the job, $12\%$ of the remaining applicants do not pass the pre-screening assessment, $23\%$ of those remaining applicants do not show up for the interview, and $5\%$ of those remaining applicants fail the background investigation. How many applicants need to apply in order to meet the hiring target?
$$A)\ 30\ \ \ \ \ \ B)\ 45\ \ \ \ \ \ C)\ 50\ \ \ \ \ \ D)\ 52\ \ \ \ \ \ E)\ 60$$
My answer:
I added $10+12+23+5=50$
That gave us $50\%$. I took a look at the answers and getting $50\%$ of $E)\ 60$ is $30.$
However, when I tried to solve it, I took $(0.50)(30)=15$ I then added $15$ to $30$ and it gave me $B)$ $45.$ Can someone please show me how to solve this?
REPLY [2 votes]: If you have studied successive discounts,
the question is exactly equivalent to finding the whole dollar marked price of something
that is sold for $\$30$ at successive discounts of $10\%, 12\%, 23\%\; and\; 5\%$
$N(1-0.1)(1-0.12)(1-0.23)(1-0.05) = 30\; \Rightarrow N = 52$<|endoftext|>
TITLE: Is every unramified extension of DVRs simple?
QUESTION [6 upvotes]: Let $A$ be a discrete valuation ring with maximal ideal $\mathfrak{m}$, fraction field $K$, and $L$ a finite separable extension of $K$ degree $n$, unramified w.r.t. $A$. Let $B$ be the integral closure of $A$ in $L$. Is it true that $B$ has the form $A[x]/(f)$ for some $f\in A[x]$?
Here's what I have so far:
Certainly $B$ is finite etale over $A$ of degree $n$, and if $\alpha\in B$ is a generator of $L$ over $K$ with monic minimal polynomial $f\in A[x]$, then $A[x]/(f)$ is finite flat over $A$ of degree $n$, so if $A[x]/(f)$ is etale over $A$, then the natural injection $A[x]/(f)\hookrightarrow B$ would have to be etale of rank 1, hence an isomorphism. Thus, it suffices to show that $A[x]/(f)$ is etale over $A$, or equivalently, that the image $\overline{f}$ of $f$ in $(A/\mathfrak{m})[x]$ is a separable polynomial.
Another question: is the unramified condition necessary? What's an example of a finite non-simple extension of DVR's?
REPLY [2 votes]: The answer to your question is no. You can try to do the following exercise (taken from Serre, Local Fields, Ch. III, section 6):
Define $A,B,K,L$ in the way you did above, and assume $B$ is 'completely decomposed', i.e., there are $n = [L:K]$ primes of $B$ above the prime $\mathfrak m$ of $A$. Then $B$ is of the form $A[x]$ (for some $x\in B$) if and only if $n\leq \text{Card }\overline K$, with $\overline K$ the residue field of $K$.
This also shows that the second question is irrelevant.<|endoftext|>
TITLE: Prove that the polynomial $\prod\limits_{i=1}^n\,\left(x-a_i\right)-1$ is irreducible in $\mathbb{Z}[x]$.
QUESTION [6 upvotes]: Let $n>1$ be an integer. For $a_1,a_2,\ldots,a_n\in\mathbb{Z}$ with $a_1< a_2< a_3 < \dots < a_n$, prove that the polynomial $$f(x)=(x-a_1)(x-a_2)\cdots(x-a_n)-1\,.$$ is irreducible in $\mathbb{Z}[x]$.
Please help! Thanks!
REPLY [2 votes]: This is an alternative answer to your question.
Consider the polynomial $f^2$ instead; note that $f(a_i)^2=1$ for every $i$. Suppose $f$ were reducible over $\mathbb{Z}[x]$, say $f=gh$ with $g,h\in\mathbb{Z}[x]$ non-constant; then $f^2=g'h'$ with $g'=g^2$ and $h'=h^2$. Since $\deg(g')+\deg(h')=2n$, we may assume $\deg(g')\leq n$. Because $g(a_i)h(a_i)=f(a_i)=-1$, each $g(a_i)$ is a unit in $\mathbb{Z}$, so $g'(a_i)=g(a_i)^2=1$, i.e. $(g'-1)(a_i)=0$ at the $n$ distinct points $a_i$. If $\deg(g')<n$ this forces $g'-1=0$, so $g$ is constant, a contradiction. If $\deg(g')=n$, then $g'-1$ is monic (the leading coefficients of $g$ and $h$ are $\pm1$) with the $a_i$ as roots, so by the factor theorem $g'=\prod_i(x-a_i)+1=f+2$; the same argument gives $h'=f+2$. But then $g'h'=(f+2)^2\neq f^2$. Thus $f$ is irreducible over $\mathbb{Z}[x]$.<|endoftext|>
TITLE: Dumbbell Contour? $\int_0^1 \log(x)\log(1-x)dx$ via complex methods.
QUESTION [7 upvotes]: Having evaluated this integral via the power series and various approaches via special functions, I'm now curious if there is a direct way to compute this integral by taking a slit along $[0,1]$ and using a 'dogbone' contour with the residue at infinity to obtain the famous result. My own attempts haven't led to much progress.
Please note: I have seen the many threads related to this integral, but not one that has approached the integral in this way so I hope this does not get flagged up as a duplicate.
REPLY [4 votes]: The reason that the dog bone contour is inapplicable here is that branch cuts from, say, $0$ to $\infty$ and $1$ to $\infty$ do not "collapse" into the "slit" from $0$ to $1$.
To see this, we note that for $z=x+iy$, $x>1$ and $y\to 0^+$ we have $\arg(z)=0$ and $\arg(1-z)=-\pi$, while for $x>1$ and $y\to 0^-$, we have $\arg(z)=2\pi$ and $\arg(1-z)=\pi$.
Therefore, on the upper part of the coalescing branch cuts for which $x>1$
$$\begin{align}
f(z)&=\log(z)\log(1-z)\\\\
&=(\log(x)+i0)(\log(|1-x|)-i\pi)\\\\
&=\log(x)\log(|1-x|)-i\pi \log(x)
\end{align}$$
while on the lower part of the coalescing branch cuts for which $x>1$
$$\begin{align}
f(z)&=\log(z)\log(1-z)\\\\
&=(\log(x)+i2\pi)(\log(|1-x|)+i\pi)\\\\
&=\log(x)\log(|1-x|)+i\pi \log(x)+i2\pi \log(|1-x|)-2\pi^2
\end{align}$$
Therefore, the function $f(z)$ is not continuous across the coalescing branch cuts and they do not, therefore, collapse into a "slit."
NOTE:
This situation is different from the case for which $f(z)=z^{1/2}(1-z)^{1/2}$. Following the preceding analysis, we find that on the upper part of the coalescing branch cuts
$$\begin{align}
f(z)&=\sqrt{z} \sqrt{1-z} \\\\
&=\sqrt{x}\sqrt{x-1}e^{-i\pi/2}\\\\
&=-i\sqrt{x}\sqrt{x-1}
\end{align}$$
while on the lower part of the coalescing branch cuts for which $x>1$
$$\begin{align}
f(z)&=\sqrt{z}\sqrt{1-z}\\\\
&=\sqrt{x}e^{i\pi}\sqrt{x-1}e^{i\pi/2}\\\\
&=-i\sqrt{x}\sqrt{x-1}
\end{align}$$
Therefore, the function $f(z)$ is continuous across the coalescing branch cuts and they do, therefore, collapse into a "slit."<|endoftext|>
TITLE: A $C^{1}$ map $f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ with $f(x)=0$ for $|x|\geq r$ has Jacobian integral zero
QUESTION [9 upvotes]: Let $f \colon \mathbb{R}^n \to \mathbb{R}^n$ be of class $C^{1}$. Suppose that there exists $r>0$ such that $f(x)=0$ if $|x|\geq r$. Prove that there exists $k>0$ such that:
$$\int_{B[0,k]}\det Jf(x)\,\mathrm{d}x=0$$
I have not yet seen Stokes' theorem, which I think is related to this question, and I'm trying to adapt the proof of the change of variables theorem without much success; any hint is appreciated.
REPLY [3 votes]: This integral has a similar form to the change of variables formula, with one crucial difference: there is no absolute value on the Jacobian here. I think it is unlikely you could find a nice proof by trying to change coordinates. (I think any such proof would essentially boil down to using Stokes' theorem without explicitly saying so.)
Stokes' theorem simply states that the integral of the Jacobian determinant over the ball is the same as an integral built from $f$ over the boundary of the ball. If we pick $k$ to be larger than $r$, then this boundary integral is $0$, since $f(x)$ is zero outside the ball of radius $r$.
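One can check this numerically on a concrete example (the particular bump-type map below is my own choice; the factor $(1-x^2-y^2)^2$ makes it $C^1$ and supported in the closed unit disk):

import numpy as np
import sympy as sp
from scipy.integrate import dblquad

x, y = sp.symbols('x y')
bump = (1 - x**2 - y**2)**2                  # vanishes to second order on the unit circle
f1, f2 = bump * x * y, bump * (x**2 - y)     # a C^1 map R^2 -> R^2, zero outside the disk

det_J = sp.lambdify((x, y), sp.Matrix([f1, f2]).jacobian([x, y]).det(), 'numpy')

# integrate det Jf over its support (the unit disk) in polar coordinates
val, err = dblquad(lambda r, t: det_J(r * np.cos(t), r * np.sin(t)) * r,
                   0, 2 * np.pi, lambda t: 0.0, lambda t: 1.0)
print(val)   # ~ 0, up to quadrature error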
Stokes' theorem is essentially a generalization of the Fundamental Theorem of Calculus to higher dimensions, and $\det Jf(x)$ is what we use in place of the derivative of $f(x)$.<|endoftext|>
TITLE: Is $\bigcup_{x \in A} [x - 1, x + 1]$ Lebesgue measurable, where $A$ is a Lebesgue measurable subset of $\mathbb{R}$?
QUESTION [6 upvotes]: Suppose $A$ is a Lebesgue measurable subset of $\mathbb{R}$ and $$B = \bigcup_{x \in A} [x - 1, x + 1].$$Is $B$ Lebesgue measurable?
REPLY [5 votes]: We have:
$$B = \bigcup_{x \in A} (x-1, x+1) \cup \bigcup_{x \in A} \{x-1\} \cup \bigcup_{x \in A} \{x +1\} = O \cup (A - 1) \cup (A + 1)$$
$O$ is open so Lebesgue measurable, and $A \pm 1$ are Lebesgue measurable since so is $A$.
REPLY [2 votes]: The answer is yes, even for $A$ being an arbitrary index sets. The following is from Problem 2.J. of General Topology by Kelley:
A proper interval is defined to be an open/closed/half-open interval with different endpoints. Let $\mathscr{C}$ be an arbitrary family of proper intervals. Then there is a countable subset $\mathscr{B}$ of $\mathscr{C}$, such that $\bigcup \mathscr{B} = \bigcup\mathscr{C}$.
Put $C = \bigcup \mathscr{C}$. First, we prove that all but at most countably many points in $C$ are interior points of some $I\in \mathscr{C}$. If $x\in C$ is not an interior point of any $I\in \mathscr{C}$, then it's an end point of some (half-)closed interval in $\mathscr{C}$. Suppose we have uncountably many such $x$. WLOG, suppose that uncountably many of them are left end points. Let $S$ be the set of them.
To each $s\in S$ there corresponds a proper interval $[s,t) \subset I$ for some $I\in \mathscr{C}$, such that $[s,t)\cap S = \{s\}$ (because other points in $S$ are not interior points of $I$). Hence these proper intervals $[s,t)$ are pairwise disjoint (otherwise, say $[s_1,t_1)$ intersects $[s_2,t_2)$ and $s_1 < s_2$, then $s_2 \in [s_1,t_1)$). But $\mathbb{R}$ allows at most countably many pairwise disjoint proper intervals (because it's separable), which is a contradiction.
Let $C'$ be the set of points $x\in C$ such that $x\in I^\mathrm{o}$ for some $I\in \mathscr{C}$, the previous discussion shows that $C-C'$ is at most countable. Note that $\{I^\mathrm{o}:I\in \mathscr{C}\}$ is an open cover of $C'$. Since $\mathbb{R}$ is secound-countable, this open cover admits a countable subcover. Let $\mathscr{B}$ be the set of proper intervals corresponding to this subcover, together with one element of $\mathscr{C}$ for each $x\in C-C'$ that contains $x$. Then $\mathscr{B}$ is countable, and $\bigcup \mathscr{B} = C$.<|endoftext|>
TITLE: Prove that $A$ cannot be invertible if $A^2=0$
QUESTION [5 upvotes]: Let $A$ be an $n\times n$ matrix for which $A^2=0$. Prove that $A$ can not be invertible.
My attempt:
Given $A^2 = 0$, this means that $A = 0$. If $A$ is invertible, there must be an $n \times n$ matrix $B$ such that $AB = I$. However, because $A = 0$, this is not possible, thus $A$ is not invertible.
REPLY [3 votes]: If $A$ is invertible, then $A^2$ is invertible too. Thus $$I=(A^2)^{-1}A^2=(A^2)^{-1}0=0,$$ a contradiction.<|endoftext|>
TITLE: Irreducible polynomial over $\mathbb{Q}$ implies polynomial is irreducible over $\mathbb{Z}$
QUESTION [15 upvotes]: Let $f(x) \in \mathbb{Z}[x]$ be a polynomial of degree $\geq 2$. Then choose correct
a) if $f(x)$ is irreducible in $ \mathbb{Z}[x] $ then it is irreducible in $ \mathbb{Q}[x] $.
b) if $f(x)$ is irreducible in $ \mathbb{Q}[x] $ then it is irreducible in $ \mathbb{Z}[x] $.
a) is definitely true; for b), $f(x)=2(x^2+2)$ is clearly irreducible over $\mathbb{Q}[x]$.
But I am confused about whether $f(x)$ is irreducible over $\mathbb{Z}[x]$ or not. According to Gallian, since $2$ is a non-unit in $\mathbb{Z}$, $f(x)$ is reducible over $\mathbb{Z}[x]$, so b) is false.
But the definition of an irreducible polynomial on Wikipedia says a polynomial is reducible if it can be written as a product of non-constant polynomials; by that definition $f(x)$ is irreducible over $\mathbb{Z}[x]$, and accordingly b) is true.
REPLY [8 votes]: Consider the polynomial $p(x)=3x+3$. Since the coefficients are integers, $p(x)$ belongs to $\mathbb{Z}[x] \subset\mathbb{Q}[x]$. We can rewrite it as $3(x+1)$, but now: $3$ is a unit in $\mathbb{Q}$ since it is invertible there, so the polynomial is irreducible in $\mathbb{Q}[x]$; but $3$ is not invertible in $\mathbb{Z}$, so the factorization above shows that the polynomial is reducible (a product of irreducible elements) in $\mathbb{Z}[x]$. So statement b) is false.<|endoftext|>
TITLE: Finding this operator's spectrum
QUESTION [5 upvotes]: In an exam, my professor gave the following exercise:
State and prove the spectral theorem for compact operators. Let $K$ be the operator defined by:
$$Kf(t)=\int_0^1\min(t,s)f(s)\mathrm{d}s.$$
(i) Show $K$ is a bounded operator from $L^2([0,1])$ to itself;
(ii) Show $K$ is a compact and self-adjoint operator;
(iii) Determine the spectrum of $K$; is the point 0 an eigenvalue?
(iv) Calculate the operator norm of $K$.
Apart from wondering whether he meant compact self-adjoint operators or wanted statement and proof of the properties of the spectrum, i.e. 0 being an element and there being only a finite number of finite-multiplicity eigenvalues of $K$ outside any disk centered at 0, I can handle the proof part. This is a Hilbert-Schmidt operator, so we analyze the kernel. We show it is $L^2([0,1]^2)$ as follows:
\begin{align*}
\int_{[0,1]^2}\min(t,s)^2\mathrm{d}s\mathrm{d}t={}&\int_0^1\int_0^1\min(t,s)^2\chi_{t\leq s}(s)\mathrm{d}t\mathrm{d}s+{} \\
&{}+\int_0^1\int_0^1\min(t,s)^2\chi_{s\leq t}(s)\mathrm{d}s\mathrm{d}t={} \\
{}={}&\int_0^1\int_0^st^2\mathrm{d}t\mathrm{d}s+{} \\
&{}+\int_0^1\int_0^ts^2\mathrm{d}s\mathrm{d}t={} \\
{}={}&\int_0^1\frac{s^3}{3}\mathrm{d}s+\int_0^1\frac{t^3}{3}\mathrm{d}t={} \\
{}={}&\frac{1}{12}+\frac{1}{12}=\frac16.
\end{align*}
So the operator is both compact and bounded, and has operator norm at most $\frac16$. The kernel is evidently real and symmetric w.r.t. variable swapping, hence the operator is self-adjoint.
But how do I find the spectrum? How do I show 0 is(n't) an eigenvalue? And how can I find the spectral norm? I tried producing an eigenvector for zero, and should have proved monomials, $\sin(2\pi kx)$ and $\cos(2\pi kx)$ are not. So now I am a bit at a loss. I tried supposing $Kf=0$ and ended up with the following:
$$\int_0^tsf(s)\mathrm{d}s+t\int_t^1f(s)\mathrm{d}s=0,$$
for all $t\in[0,1]$. Now what? Then I tried using monomials to get the norm as being exactly the norm of the kernel. So I normalized $s^n$ in $L^2$, getting $\sqrt{2n+1}s^n$. I plugged that into $K$ and got $\frac{\sqrt{2n+1}}{n+1}t[1-\frac{t^{n+1}}{(n+1)(n+2)}]$. Then I plugged this into wolfram, and got this, which evaluated at $t=1$ gives this. Now the minimum appears to be 0, but then again it is attained for $n=-\frac12$ which gives $\frac{1}{\sqrt{x}}\notin L^2$, so at least this is good to go. But calculating that max gave me something around 0.3, which is more than $\frac16$ which I calculated for the norm of the kernel. So have I miscalculated something? And how do I find that spectrum?
REPLY [6 votes]: It is clear that $0$ is in the spectrum of $K$, since $K$ is compact. Let us address if it is an eigenvalue: if $$ Kf=0,$$ this means we have
$$\tag{1}
0=t\int_t^1f(s)\,ds+\int_0^ts\,f(s)\,ds.
$$
Differentiating (via Lebesgue's Differentiation Theorem),
$$\tag{2}
0=\int_t^1 f(s)\,ds-tf(t)+tf(t)=\int_t^1f(s)\,ds,\ \ \ \text{a.e.}.
$$ Then, for any $v,t\in[0,1]$,
$$
\int_t^vf(s)\,ds=\int_t^1f(s)\,ds-\int_v^1f(s)\,ds=0.
$$
Applying Lebesgue's differentiation again, we get that $f(t)=0$ a.e. It follows that $0$ is not an eigenvalue.
For the nonzero part of the spectrum: since $K$ is compact, all nonzero elements of the spectrum are eigenvalues. Assume that $\lambda$ is an eigenvalue: then
$$\tag{3}
\lambda f(t)=t\int_t^1f(s)\,ds+\int_0^ts\,f(s)\,ds.
$$
The right-hand-side of $(3)$ is continuous, so $f$ is continuous. But, knowing that $f$ is continuous, the right-hand side is differentiable; so $f$ is differentiable. Repeating the reasoning, we get that $f$ is infinitely differentiable. If we differentiate $(3)$, we get
$$\tag{4}
\lambda f'(t)=\int_t^1 f(s)\,ds-tf(t)+tf(t)=\int_t^1 f(s)\,ds.
$$
Differentiating once again,
$$\tag{5}
\lambda f''(t)=-f(t).
$$ The solution to this DE is
$$\tag{6}
f(t)=\alpha\,\cos\frac{t}{\sqrt\lambda}+\beta\,\sin\frac{t}{\sqrt\lambda}.
$$
Putting this $f$ into $(3)$, we get, after simplifying,
$$
0=\alpha t\sqrt\lambda\sin\frac1{\sqrt\lambda}+\beta t\sqrt\lambda\cos\frac1{\sqrt\lambda}-\alpha\lambda.
$$
Evaluating at $t=0$, we get $\alpha=0$. For the equation to hold for every $t>0$ with $\beta\ne0$, we thus need
$$
\cos\frac1{\sqrt\lambda}=0.
$$
This forces $$\frac1{\sqrt\lambda}=\frac\pi2+k\pi,\qquad k=0,1,2,\dots,$$ so the eigenvalues are
$$
\lambda_k=\frac4{(2k+1)^2\pi^2},\ \ k=0,1,2,\dots.
$$
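As a quick numerical sanity check of these eigenvalues, here is a simple Nyström discretization of $K$ on a uniform grid (the grid size is an arbitrary choice of mine):

import numpy as np

N = 2000
h = 1.0 / N
t = (np.arange(N) + 0.5) * h                  # midpoints of the grid cells
K = np.minimum.outer(t, t) * h                # (K f)(t_i) ~ sum_j min(t_i, t_j) f(t_j) h

eigs = np.sort(np.linalg.eigvalsh(K))[::-1]
print(eigs[:4])
print([4 / ((2 * k + 1)**2 * np.pi**2) for k in range(4)])
# both lines: approximately [0.4053, 0.0450, 0.0162, 0.0083]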
Finally, for the operator norm: since $K$ is selfadjoint, we have
$$
\|K\|=\sup\{|\lambda|:\ \lambda\in\sigma(K)\}=\sup\left\{\frac4{(2k+1)^2\pi^2}:\ k=0,1,2,\dots\right\}=\lambda_0=\frac4{\pi^2}.
$$<|endoftext|>
TITLE: $0\to C'\to C\to C''\to0$ splits if $C\cong C'\oplus C''$ as a chain complex?
QUESTION [5 upvotes]: Question
Given a unitary ring $A$ and an exact sequence $$0\to C'\xrightarrow iC\xrightarrow pC''\to0$$ in the Abelian category of chain complexes over $A$, where $C,C',C''$ are chain complexes of finitely-generated free modules (I don't know whether this could be replaced by projective modules). If $C\cong C'\oplus C''$ as chain complexes, is it true that the original exact sequence splits in the Abelian category of chain complexes?
Results
If $A$ is a field or a PID, and that the complexes (the total complexes seen as $A$-modules) are of finite rank, then the statement could be proved as follows:
Let $\mathcal A$ be the Abelian category of chain complexes over $A$. Take $\operatorname{Hom}_{\mathcal A}(C'',-)$, we have an exact sequence:
$$0\to\operatorname{Hom}(C'',C')\xrightarrow{i_*}\operatorname{Hom}(C'',C)\xrightarrow{p_*}\operatorname{Hom}(C'',C'')$$
Since $C\cong C'\oplus C''$, we have $\operatorname{Hom}(C'',C)\cong\operatorname{Hom}(C'',C')\oplus\operatorname{Hom}(C'',C'')$, and we should note that all these $\operatorname{Hom}$'s are submodules of free modules, hence free ($A$ is a PID). It follows from dimension counting that $p_*$ is surjective, hence the original exact sequence splits.
Backgrounds
It's a generalization of Roth's theorem. Given matrices $A,B,C$ over a commutative ring $R$ and let
$$P=\begin{bmatrix}B&0\\0&C\end{bmatrix}$$ and $$P_A=\begin{bmatrix}B&A\\0&C\end{bmatrix}$$
Then
If $P,P_A$ are equivalent, then there exists $X,Y$ such that $A=BX-YC$.
If $B,C$ are square matrices and $P,P_A$ are similar, then there exists $X$ such that $A=BX-XC$.
The first statement follows from the question (if it's true): we consider two complexes $$K\colon\to0\to K_0=R^\bullet\xrightarrow CR^\bullet\to0\to$$ and $$L\colon\to R^\bullet\xrightarrow BL_0=R^\bullet\to0\to0\to$$
The chain homomorphism $f$ is given by the matrix $A\colon K_0\to L_0$ (and zero on any other degree). Consider the canonical exact sequence involving a mapping cone:
$$0\to L\to\operatorname{cone}(f)\to K[-1]\to0$$
Note that the matrix associated to the boundary operator of $\operatorname{cone}(f)$ is $P_A$ (up to some signs), which means that $\operatorname{cone}(f)\cong L\oplus K[-1]$. We apply the result of the question, and it follows directly that $f$ is null homotopic, hence we can solve the matrix equation.
The second statement follows from the first statement. If $P,P_A$ are similar, then $T-P,T-P_A$ are equivalent over the ring $R[T]$, hence there exists $P(T)\in\operatorname{Mat}(R[T])$ and $Q(T)\in\operatorname{Mat}(R[T])$ (we omit the computation of the magnitude of matrices) such that $(T-B)P(T)+Q(T)(T-C)=A$. If we factor $Q(T)=(T-B)Q_1(T)+Q_0$, we have
$$(T-B)(P(T)+Q_1(T)(T-C)+Q_0)+BQ_0-Q_0C=A.$$ Compare the remainder term, we obtain $BQ_0-Q_0C=A$.
Maybe related
I just found this post: A nonsplit short exact sequence of abelian groups with $B \cong A \oplus C$
REPLY [2 votes]: I think it may be true if the ring is noetherian, but not for general rings.
I'll build a counterexample in a few steps.
First, there are easy examples of non-split short exact sequences $0\to A'\to A\to A''\to0$ of bounded complexes of finitely generated vector spaces over a field $k$. For example, there's an obvious such sequence with
$$A'=\dots\to0\to0\to k\to0\to\dots,$$
$$A=\dots\to0\to k\stackrel{\sim}{\to} k\to0\to\dots,$$
$$A''=\dots\to 0\to k\to 0\to0\to\dots.$$
Next, by taking the direct sum with countably many copies of the split short exact sequence
$$0\to A'\oplus A\oplus A''\to(A'\oplus A\oplus A'')^2\to A'\oplus A\oplus A''\to0,$$
we can construct a non-split short exact sequence $0\to B'\to B\to B''\to0$ of complexes of vector spaces with all the non-zero terms isomorphic to a countably infinite dimensional vector space $V$, and with $B\cong B'\oplus B''$.
For any object $V$ of an additive category, with $E=\operatorname{End}(V)$, the functor $\operatorname{Hom}(V,-)$ is an equivalence of categories from the category of finite direct sums of copies of $V$ to the category of finitely generated free right $E$-modules.
So finally, applying the functor $\operatorname{Hom}_k(V,-)$ to $0\to B'\to B\to B''\to0$, we get a non-split short exact sequence $0\to C'\to C\to C''\to0$ of complexes of finitely generated free $E$-modules with $C\cong C'\oplus C''$.<|endoftext|>
TITLE: Is it possible for an irreducible polynomial with rational coefficients to have three zeros in an arithmetic progression?
QUESTION [31 upvotes]: Assume that $p(x)\in \Bbb{Q}[x]$ is irreducible of degree $n\ge3$.
Is it possible that $p(x)$ has three distinct zeros $\alpha_1,\alpha_2,\alpha_3$ such that $\alpha_1-\alpha_2=\alpha_2-\alpha_3$?
As also observed by Dietrich Burde a cubic won't work here, so we need $\deg p(x)\ge4$. The argument goes as follows. If $p(x)=x^3+c_2x^2+c_1x+c_0$, then
$-c_2=\alpha_1+\alpha_2+\alpha_3=3\alpha_2$ implying that $\alpha_2$ would be rational and contradicting the irreducibility of $p(x)$.
This came up when I was pondering this question. There the focus was in minimizing the extension degree
$[\Bbb{Q}(\alpha_1-\alpha_2):\Bbb{Q}]$. I had the idea that I want to find a case, where $\alpha_1-\alpha_2$ is fixed by a large number of elements of the Galois group $G=\operatorname{Gal}(L/\Bbb{Q})$, $L\subseteq\Bbb{C}$ the splitting field of $p(x)$. One way of enabling that would be to have a lot of repetitions among the differences $\alpha_i-\alpha_j$ of the roots $\alpha_1,\ldots,\alpha_n\in\Bbb{C}$ of $p(x)$. For the purposes of that question it turned out to be sufficient to be able to pair up the zeros of $p(x)$ in such a way that the same difference is repeated for each pair (see my answer).
But can we build "chains of zeros" with constant interval, i.e. arithmetic progressions of zeros.
Variants:
If it is possible for three zeros, what about longer arithmetic progressions?
Does the scene change, if we replace $\Bbb{Q}$ with another field $K$ of characteristic zero? (Artin-Schreier polynomials show that the assumption about the characteristic is relevant.)
REPLY [30 votes]: The answer to your question is NO. Suppose by contradiction that such a polynomial $P$ exists, and denote by $S$ the set of roots. By hypothesis, some $a\in S$ can be written $a=\frac{b+c}{2}$ where $b,c$ are distinct elements of $S$. But since the Galois group acts transitively on $S$, this property holds for all $a\in S$. This motivates the following definition :
Definition. A (non-empty) set $S\subseteq {\mathbb C}$ is AP-extensive if any $a\in S$ can be written $a=\frac{b+c}{2}$ where $b,c$ are distinct elements of $S$.
Note that an AP-extensive $S\subseteq {\mathbb R}$ cannot have a largest element. In particular, any (non-empty) AP-extensive $S\subseteq {\mathbb R}$ is necessarily infinite. This still holds in $\mathbb C$ :
Lemma. If $S\subseteq {\mathbb C}$ is AP-extensive, then $S$ is infinite (or empty).
Proof of lemma. Suppose by contradiction that $S$ is finite and nonempty. Then the set $\lbrace b\in{\mathbb R}\ | \ \exists a, a+ib\in S\rbrace$ is finite also and therefore has a largest element $b_0$. Let $S_1=\lbrace z\in S \ | \ Im(z)=b_0 \rbrace$. It is easy to see that if $a=\frac{b+c}{2}$ with $a,b,c\in S$ and
further $a\in S_1$, then $b$ and $c$ must be in $S_1$ also. So $S_1$ is AP-extensive as well. Next,
let $S_2=S_1-ib_0$. Then $S_2$ is AP-extensive also, but by construction $S_2\subseteq {\mathbb R}$. So $S_2$ (and hence $S_1,S$ also) must be infinite which is impossible. This concludes the proof.<|endoftext|>
TITLE: Inequality of absolute values of complex sums
QUESTION [6 upvotes]: Let $c_1,\dots, c_n\geq 0$ and $x_1,\dots,x_n\in\mathbb R$. Then $$\left\lvert \sum_k \frac{c_k x}{(x_k-i x)^2}\right\rvert\leq \left\lvert \sum_k \frac{c_k }{x_k-i x}\right\rvert$$ seems to hold for all real $x\not=0$.
Why is this the case? It's trivial for $n=1$, but already for $n=2$ writing out these absolute values becomes rather messy. Am I overlooking something?
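A quick random numerical check is consistent with the claim (a sketch with numpy; it just samples nonnegative $c_k$ and real $x_k$, $x$ and compares the two sides):

    import numpy as np

    rng = np.random.default_rng(1)
    for _ in range(5):
        c = rng.random(4)                    # c_k >= 0
        xk = rng.standard_normal(4)          # real x_k
        x = rng.standard_normal()            # real x (nonzero with probability 1)
        lhs = abs(np.sum(c * x / (xk - 1j * x) ** 2))
        rhs = abs(np.sum(c / (xk - 1j * x)))
        print(lhs <= rhs, lhs, rhs)          # expected: True every time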
REPLY [6 votes]: The numbers $(x,x_1,\ldots,x_n)$ can be replaced by $(-x,-x_1,\ldots,-x_n)$ without changing the statement, so we may assume $x\ge0$.
Then
$$
RHS =
\left| \sum_k \frac{c_k}{x_k-ix} \right|
\ge \mathrm{Im\,} \sum_k \frac{c_k}{x_k-ix}
= \sum_k \mathrm{Im\,} \frac{c_k(x_k+ix)}{(x_k-ix)(x_k+ix)} = \\
= \sum_k \frac{c_kx}{|x_k-ix|^2}
= \sum_k \left|\frac{c_kx}{(x_k-ix)^2}\right|
\ge \left| \sum_k \frac{c_kx}{(x_k-ix)^2}\right|
= LHS.
$$<|endoftext|>
TITLE: Limit involving the Sine integral function
QUESTION [9 upvotes]: $$
\mbox{Prove that}\qquad
\lim_{x \to \infty}\left[\vphantom{\large A}%
x\,\mathrm{si}\left(x\right)+ \cos\left(x\right)\right]
= 0
$$
where we define
$$\mathrm{si}\left(x\right) =
- \int^{\infty}_{x}\frac{\sin\left(t\right)}{t}\,\mathrm{d}t
$$
I have no clue how to start. I have verified the result using wolframalpha.
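A quick numerical check of the limit is also easy in Python (a sketch using scipy.special.sici, which returns $\operatorname{Si}(x)$ and $\operatorname{Ci}(x)$; here $\operatorname{si}(x)=\operatorname{Si}(x)-\pi/2$):

    import numpy as np
    from scipy.special import sici

    for x in (10.0, 100.0, 1000.0, 10000.0):
        Si, _ = sici(x)
        si = Si - np.pi / 2           # si(x) = -int_x^inf sin(t)/t dt = Si(x) - pi/2
        print(x, x * si + np.cos(x))  # values tend to 0 as x grows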
REPLY [3 votes]: If we set, using standard notations,
$$ f(x) = -\frac{\pi x}{2}+\cos(x)+x\,\text{Si}(x) = x\,\text{si}(x)+\cos(x) \tag{1}$$
we have:
$$ f'(x) = -\frac{\pi}{2}+\text{Si}(x) = -\int_{x}^{+\infty}\frac{\sin t}{t}\,dt \tag{2}$$
and since $f(0)=1$,
$$ f(x) = 1-\int_{0}^{x}\int_{u}^{+\infty}\frac{\sin t}{t}\,dt\,du =1-\int_{1}^{+\infty}\frac{1-\cos(t x)}{t^2}\,dt\tag{3}$$
Now
$$ \lim_{x\to +\infty}\int_{1}^{+\infty}\frac{\cos(tx)}{t^2}\,dt = 0\tag{4}$$
by the Riemann-Lebesgue lemma, hence, by $(3)$,
$$ \lim_{x\to +\infty}f(x) = 1-\int_{1}^{+\infty}\frac{dt}{t^2} = \color{red}{0}.\tag{5}$$<|endoftext|>
TITLE: Finding $1/x^2 + 1/x^3 + 1/x^5 + \dots $
QUESTION [44 upvotes]: The following function came up in my work:
$$
f(x)=\sum_{p\text{ prime}}\frac{1}{x^p}=\frac{1}{x^2}+\frac{1}{x^3}+\frac{1}{x^5}+\frac{1}{x^7}+\frac{1}{x^{11}}+\cdots.
$$
Naturally, this converges for $x>1$ since the geometric series does. Does this function have a name? Is there a better way to calculate it than the straightforward sum? In my application I can bound $x$ away from 1 if it helps.
REPLY [6 votes]: The values of your function $f$ at integers $n\ge 2$ correspond to the base-$n$ representations of the prime constant.
Indeed, $f$ is closely related to the characteristic function of the prime numbers. For instance, $f(2)$ evaluates to the prime constant $\rho$, defined as:
$$
\rho =\sum _{{p}}{\frac {1}{2^{p}}}=\sum _{{n=1}}^{\infty }{\frac {\chi _{{{\mathbb {P}}}}(n)}{2^{n}}},
$$
where $\chi_\mathbb{P}$ is the characteristic function of the primes, i.e., the function such that for positive integer $n$:
$$
{\displaystyle \chi_\mathbb{P}(n):={\begin{cases}1&{\text{if }}n\in \mathbb{P},\\0&{\text{if }}n\notin \mathbb{P},\end{cases}}}
$$
where $\mathbb{P}$ denotes the set of prime numbers.
The decimal expansion of $\rho$ begins with:
\begin{align}
\rho&=0.414682509851111660248109622\ldots \\
&=0.011010100010100010_2.
\end{align}
and is included in the OEIS as sequence A051006.
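As a quick numerical check (a sketch in Python; sympy's primerange supplies the primes, and the tail of the series beyond the cutoff is negligible):

    from sympy import primerange

    def f(x, pmax=200):
        # partial sum of sum_{p prime} x**(-p); the tail beyond pmax is tiny for x >= 2
        return sum(x ** (-p) for p in primerange(2, pmax))

    print(f(2))   # ~ 0.414682509851...  (the prime constant, OEIS A051006)
    print(f(3))   # ~ 0.1527269...       (the "base-3" value described below)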
The values of $f$ for other integers $n$ correspond simply to the base-$n$ representations of the prime constant. If we denote by $\rho_n$ the base-$n$ representation of $\rho$, we have:
\begin{align}
f(3)=\sum _{{p}}{\frac {1}{3^{p}}}&=\rho_3 \\
&=0.011010100010100010_3 \\
&=0.1527269\ldots
\end{align}
Therefore $f(n)=\rho_n$ for integers $n\ge2$.<|endoftext|>
TITLE: Monotone functions and non-vanishing of derivative
QUESTION [19 upvotes]: The following result is well known:
If $f$ is continuous on $[a, b]$, differentiable on $(a, b)$ and $f'$ is non-zero on $(a, b)$ then $f$ is strictly monotone on $[a, b]$.
However if the derivative vanishes at a finite number of points in $(a, b)$ and apart from these derivative maintains a constant sign in $(a, b)$ then also the function is strictly monotone on $[a, b]$ (just split the interval into finite number of intervals using these points where derivative vanishes and $f$ is strictly monotone in same direction in each of these intervals).
Let's suppose now that $f$ is strictly monotone and continuous in $[a, b]$ and differentiable in $(a, b)$. What can we say about set of points $$A = \{x \mid x \in (a, b), f'(x) = 0\}$$ Can it be infinite? Can it be uncountable? How large the set $A$ can be?
REPLY [14 votes]: It's clear that $A$ must have empty interior. At least for closed sets that's a characterization:
Theorem If $A\subset[0,1]$ is compact with empty interior then there exists a strictly increasing $f\in C^1([0,1])$ such that $A$ is the zero set of $f'$.
Exercise Modify the following proof to show that we can even get $f$ infinitely differentiable.
Proof. Say the connected components of $[0,1]\setminus A$ are $I_1,\dots$. Each $I_n$ is a relatively open interval; there exist $a_n$, $b_n$ with $$I_n\cap(0,1)=(a_n,b_n).$$
Choose $f_n\in C^1(\Bbb R)$ so that $f_n(t)=0$ for all $t\le a_n$, $f_n(t)=1$ for all $t\ge b_n$, and $f_n'(t)>0$ for all $t\in(a_n,b_n)$. Choose $c_n>0$ so that $$\sum_n c_n\sup_t(|f_n(t)|+|f_n'(t)|)<\infty,$$and let $$f=\sum_n c_nf_n.$$The hypothesis implies that $$f'=\sum_nc_nf_n',$$so $f$ is continuously differentiable and $$\{t\in[0,1]:f'(t)=0\}=A.$$And $f$ is strictly increasing: Say $0\le x<y\le1$. Since $A$ has empty interior, the interval $(x,y)$ contains a point $t\notin A$, so $f'(t)>0$; since $f'\ge0$ everywhere, it follows that $f(y)-f(x)=\int_x^y f'>0$. QED<|endoftext|>
TITLE: Continuous function with non-negative derivative a.e. implies non-decreasing?
QUESTION [5 upvotes]: Let $f \colon [a,b] \rightarrow \mathbb{R}$ be a continuous function on a compact interval of the real line. Suppose that $f$ is differentiable almost everywhere and that $f'(x) \geq 0$ at every point of differentiability. Is it true that $f$ is non-decreasing on [a,b]?
(If $f$ is $\textit{absolutely}$ continuous, this is certainly true, but I'm not so sure what happens if you weaken the assumption to mere continuity.)
REPLY [7 votes]: No. The Cantor-Lebesgue function $f$ is continuous, non-decreasing, non-constant, and satisfies $f'=0$ almost everywhere. It's not hard to see that $f$ is non-differentiable at every point of the Cantor set. So $g=-f$ is a counterexample to your question: $g$ is continuous, differentiable almost everywhere, satisfies $g'\ge 0$ at every point of differentiability, but $g$ is not non-decreasing.<|endoftext|>
TITLE: Bounds on $f(k ;a,b) =\frac{ \int_0^\infty \cos(a x) e^{-x^k} \, dx}{ \int_0^\infty \cos(b x) e^{-x^k}\, dx}$
QUESTION [8 upvotes]: Suppose we define a function
\begin{align}
f(k ;a,b) =\frac{ \int_0^\infty \cos(a x) e^{-x^k} \,dx}{ \int_0^\infty \cos(b x) e^{-x^k} \,dx}
\end{align}
can we show that
\begin{align}
|f(k ;a,b)| \le 1
\end{align}
for $0<k\le 2$ and $|b|\le|a|$?
REPLY: For $0<k\le2$, the function $\exp(-t^{k/2})$ is completely monotone in $t>0$
(its $n$-th derivative has sign $(-1)^n$ for all $t>0$),
and decays to zero as $t \to \infty$, whence it is
a nonnegative mixture of decreasing exponentials $e^{-ct}$ $(c>0)$ by
Bernstein's theorem.
Taking $t=x^2$ we deduce that $\exp(-|x|^k)$ is a nonnegative mixture of
Gaussians $\exp(-cx^2)$ $(c>0)$. Since the Fourier transform of $\exp(-cx^2)$
is positive and decreasing for all $c>0$, the same is true of the Fourier
transform of $\exp(-|x|^k)$, QED.<|endoftext|>
TITLE: Topology of CW-complex and attaching map
QUESTION [6 upvotes]: I think I must have a fundamental misconception in place right now in my mind.
When defining a CW-complex, we inductively use continuous maps $f_{\partial \sigma} :S^n \to K^{(n)}$. We then define a natural map $f_{\sigma}: D^{n+1} \to K^{(n+1)}$ which proceeds with the construction. For this, we take the disjoint union $\bigsqcup\limits_{\sigma} D_{\sigma}^{n+1}$ over the cells we are attaching and define
$$K^{(n+1)} = \left(\bigsqcup\limits_{\sigma} D_{\sigma}^{n+1} \right) \cup_f K^{(n)} .$$
Now, why do we need $f_{\partial \sigma}$ to be continuous? The definition of the topology on $K^{(n+1)}$ does not require continuity of $f_{\partial \sigma}$. We have a disjoint union, and a quotient map, which defines the quotient topology. Also, the $f_{\sigma}$ seem to be continuous, even if $f_{\partial \sigma}$ is not. This would follow from the fact that $f_{\sigma}$ is equal to $\pi \circ i$, where $\pi$ is the quotient map and $i$ is the inclusion map on the disjoint union, and those maps are continuous. $f_{\sigma}$ being continuous implies $f_{\sigma}|_{S^{n}}$ being continuous, since it is just the restriction. But such restriction is commonly considered as the attaching map $f_{\partial \sigma}$ itself, which (by what I said above) need not be continuous for such definitions to make sense. The problem at hand, it seems, is that the induced topology on $K^{(n)}$ as a subspace is not the same as the topology of $K^{(n)}$ itself, and maybe this is why we require the attaching maps to be continuous (in order for restriction to preserve the expected topologies), but I'm not sure.
My question, summing up the issues, is:
What is the relevance of the attaching map being continuous? If it is not, what kind of issues arise?
REPLY [4 votes]: The problem at hand, it seems, is that the induced topology on $K^{(n)}$ as a subspace is not the same as the topology of $K^{(n)}$ itself, and maybe this is why we require the attaching maps to be continuous (in order for restriction to preserve the expected topologies), but I'm not sure.
Yes, this is exactly right. If the attaching maps were not continuous, then when you formed $K^{(n+1)}$, you would be modifying the topology of $K^{(n)}$ to make them continuous. You can get all sorts of terrible spaces if you don't require this.
For instance, suppose you decided to attach a 2-cell to $S^1$ via some horrible discontinuous map $f:S^1\to S^1$ which is surjective when restricted to every nontrivial interval of the domain. In the resulting space, this map $f$ becomes continuous. But the only ordinary open subsets $U\subseteq S^1$ such that $f^{-1}(U)$ is open are $U=\emptyset$ and $U=S^1$ (since the inverse image of any point under $f$ is dense!). This means that in the quotient space, the subspace $S^1$ now has the indiscrete topology! In particular, the quotient is not Hausdorff.
An enormous amount of the utility of CW-complexes comes from the fact that they can be understood inductively using their skeleta. For instance, cellular (co)homology (and more generally the Atiyah-Hirzebruch spectral sequence) comes from considering a CW-complex as being filtered by its skeleta. But this only works when you give the skeleta their subspace topologies, and those subspace topologies are much more difficult to understand if they do not coincide with the topologies of the skeleta as CW-complexes themselves. (For instance, using the subspace topologies, it would no longer necessarily be the case that the quotient $K^{(n)}/K^{(n-1)}$ is a wedge of $n$-spheres--in the example above, $K^{(1)}$ is a circle but its subspace topology is indiscrete!)<|endoftext|>
TITLE: Does $X^{2/2} = X^{1/1}$?
QUESTION [6 upvotes]: I'm having a bit of a hard time wrapping my head around the following, which I have just learned:
$\sqrt{X^2} = |X|$, and I totally understand why.
But, when expressed as an exponent, doesn't this really just mean the following:
$X^{2/2} = |X|$, if this is the case, and I simplify the rational exponent, I would get:
$X^{1/1}$ or $X^1$, which does not equal $|X|$.
Also, if I apply the following rule of a radical function:
$\sqrt[n]{P^Q} = (\sqrt[n]P)^Q$ where n is the index of the root and $Q$ is the power of the radicand, then this should mean that:
$\sqrt{X^2} = (\sqrt{X})^2$, but the $(\sqrt{X})^2$ does not equal $|X|$ and has a domain where $X > 0$, while the $\sqrt{X^2}$ has a domain equal to all real values for $X$.
Does this mean that when $X$ is raised to an even-numbered power and is the radicand in a radical expression, that one should not simplify the rational exponent or one should not rewrite the radical expression such that the power of the radicand $X$ now lies outside of the root function?
Any replies will be greatly appreciated.
REPLY [3 votes]: This is due to the fact that there is a slight difference between $\sqrt{x}$ and $x^{1/2}$
I recommend looking up the term "Principal Root", with a basic introduction here.
In essence, for positive numbers there are always two answers to $x^{1/2}$, namely $\pm \sqrt{x}$.
From that, note that the $\sqrt{x}$ function always gives the positive root for a positive argument, and is thus a true function... it outputs one root for each argument $x$. However, the function $x^{1/2}$ outputs two values for each argument $x$, and is thus NOT a true function. Nuances like these are what is messing with your argument!<|endoftext|>
TITLE: What is the difference between intrinsic and extrinsic curvature?
QUESTION [10 upvotes]: In general relativity, energy bends spacetime. However, this doesn't mean that a fifth dimension for spacetime to "bend into" exists. That is, spacetime isn't embedded in a higher-dimensional space; instead, the curvature is said to be intrinsic.
But what does that mean? One could imagine the surface of a ball as an example of extrinsic curvature, but intrinsic curvature doesn't seem as straightforward and intuitive. Is there a simple and easy way to understand the difference, and how a space can "curve" without actually being embedded in a higher-dimensional space to "bend" in?
REPLY [7 votes]: A Riemannian manifold is intrinsically curved if there exists a geodesic triangle bounding a topological disk whose interior angles do not add to $\pi$.
An embedded Riemannian submanifold $M \subset N$ is extrinsically curved if no smooth orthonormal frame of normal vectors of $M$ is parallel (covariantly constant) in $N$.
A flat square torus embeds isometrically as a product of circles in four-dimensional space; the image is intrinsically flat (every triangle bounding a disk has total interior angle $\pi$) but extrinsically curved (there is no parallel field of unit normal vectors, much less a parallel frame).
A great sphere in a round $3$-sphere is intrinsically curved (isometric to the unit sphere in Euclidean $3$-space, so a triangle bounding a disk has total interior angle $> \pi$) but extrinsically flat (each of the two smooth unit normal fields is parallel). If you don't mind a non-compact example, a circular cylinder in Euclidean $3$-space is intrinsically flat but extrinsically curved for similar reasons.
Gravitational lensing may be viewed as a physical manifestation of intrinsic curvature: In an approximation where the lensing system is static, the path of a light ray is a spatial geodesic. Two light rays seen at distinct points of the sky form a "digon", a geodesic polygon with two sides. Because the sum of the interior angles is not zero, space is intrinsically curved.<|endoftext|>
TITLE: Is $0! = 1$ because there is only one way to do nothing?
QUESTION [90 upvotes]: The proof for $0!=1$ was already asked at here. My question, yet, is a bit apart from the original question. I'm asking whether actually $0!=1$ is true because there is only one way to do nothing or just because of the way it's defined.
REPLY [22 votes]: I'm going to go a bit against the grain and say that this isn't a very great way of thinking about this. Having a combinatorial intuition for functions that have combinatorial definitions is great, but that intuition often just doesn't work for vacuous cases where you plug in 0 for one of the values, because getting the right answer really requires thinking carefully about the formal definition with sets.
Say you have three numbers, 1, 2, 3, and you look at all the different ways you can write them down
1, 2, 3
1, 3, 2
2, 1, 3
2, 3, 1
3, 1, 2
3, 2, 1
Now if you count those, you see six lines. In general, if you do this for $n$ numbers, you will see $n!$ lines. But what about 0 numbers? Here's all the ways you can write 0 numbers:
Seems weird. I don't know about you, but I see 0 lines. But based on what everyone is saying, there should be one line there. What's going on? Is there really one way to count nothing?
The problem is that, for this intuitive definition to work right, there has to be one "empty" line. There aren't empty lines in the other cases, though, just when you have 0 numbers. Confused? Let's look at this just a little differently. Let's do the same thing again, but this time, when we write the numbers, let's write parentheses around them:
(1, 2, 3)
(1, 3, 2)
(2, 1, 3)
(2, 3, 1)
(3, 1, 2)
(3, 2, 1)
Now instead of counting lines, we count the number of parenthesized lists of numbers. These are called $n$-tuples, or just tuples. $n!$ is the number of $n$-tuples, the number of tuples of length $n$. Formally, it is the size of the set of all $n$-tuples.
Now, let's do 0. How many tuples of length 0 are there? Here's one:
()
Based on our definition of a tuple (parenthesized ordered list of numbers), that counts. It should be pretty obvious that that is the only one. So 0! is 1.
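If it helps, this count can also be checked mechanically (a small Python illustration; itertools.permutations enumerates exactly the tuples described above, including the single empty tuple for $n=0$):

    from itertools import permutations
    from math import factorial

    for n in range(4):
        tuples = list(permutations(range(1, n + 1)))
        # len(tuples) agrees with n! for every n, including n = 0 where tuples == [()]
        print(n, factorial(n), len(tuples))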
At the end of the day, you have to define functions like $n!$ in a set theoretic way. Then the values of corner cases like $0!$ fall out of those definitions, typically by something called vacuous truth ("vacuous" is a word I used above, not by accident).<|endoftext|>
TITLE: Find Taylor series polynomial that gives uniform bound on error
QUESTION [5 upvotes]: The problem comes in two parts:
Find an $\epsilon > 0$ such that for every $x\in[0,1]$ $$\left\lvert \sqrt{x}-\sqrt{x+\epsilon}\right\rvert \le \frac{1}{200}$$
We can show that $\left\lvert \sqrt{x}-\sqrt{x+\epsilon}\right\rvert $ decreases as $x$ increases to 1, so it's enough to consider the case where $x=0$ and we see that we can use $\epsilon = 1/200^2$.
(The part I didn't get or at least am not satisfied with.) Find a polynomial $P(x)$ such that for every $x\in[0,1]$ $$\left\lvert \sqrt{x}-P(x)\right\rvert \le \frac{1}{100}$$
Hint: Use the power series expansion of $\sqrt{x+\epsilon}$ around $x=1$.
Here's my work so far
First note that $$\left\lvert \sqrt{x}-P(x)\right\rvert \le \left\lvert \sqrt{x}-\sqrt{x+\epsilon}\right\rvert + \left\lvert \sqrt{x+\epsilon} - P(x)\right\rvert $$
If we can find a P(x) such that $\left\lvert \sqrt{x+\epsilon} - P(x)\right\rvert \le 1/200$, then we can use part 1 and be done. So then I tried using the hint:
$$\sqrt{x+\epsilon} = \sqrt{1+\epsilon} + \frac{1}{2}(1+\epsilon)^{-1/2}(x-1)- \frac{1}{8}(1+\epsilon)^{-3/2}(x-1)^2 + \ldots$$
The tricky part here is that we can't get a nice uniform bound on $f^{(k)}(\xi), \xi\in[0,1]$, where $f(x) = \sqrt{x+\epsilon}$, because around $x=0$ it blows up. Note for the remainder term we have
$$\left\lvert \frac{f^{(k)}(\xi)}{k!}(x-1)^k\right\rvert \le \left\lvert \frac{f^{(k)}(\xi)}{k!}\right\rvert $$
In other words, the remainder is largest in abs. value when $x=0$, so we can just focus on the Taylor series expansion after substituting $x=0$. The expansion becomes
$$\sqrt{1+\epsilon} - \frac{1}{2}(1+\epsilon)^{-1/2} - \frac{1}{8}(1+\epsilon)^{-3/2} - \ldots$$
Note that $$\frac{1}{(1+\epsilon)^{(2k-1)/2}}\le 1$$
so
$$\sqrt{1+\epsilon} - \frac{1}{2}(1+\epsilon)^{-1/2} - \frac{1}{8}(1+\epsilon)^{-3/2} - \ldots \ge \sqrt{1+\epsilon} - \frac{1}{2}-\frac{1}{8} - \ldots$$
and the coefficients are monotonically decreasing. The form of the coefficients is $$a_k=\frac{1}{2^k k!}\prod_{n=1}^{k-1} (2n-1)$$ Recall that we should be converging to 0, so each new term we add moves us closer to 0.
That's as far as I've gotten. I don't know if that was the right route, and also how to proceed from here. Maybe I've just stared at this for too long, but either way I'd like to get some feedback. Maybe there's a better way to go about this.
Thanks!
Update
I tried out my solution with Mathematica, and it seems to work. The problem is that I had to expand the Taylor series to ~12800th order. This question should be answerable without Mathematica (it is taken from an old exam).
REPLY [2 votes]: Take any $k\geq \sqrt 2$ and $e= k^{-2}.\;$ For $x\in [0,1] $ let $y= x+e-1.$ Then $$\sqrt {x+e} =\sqrt {1+y} .$$ And for all $x\in [0,1]$ we have $$|\sqrt x-\sqrt{1+y}|=|\sqrt x -\sqrt {x+e}|\leq 1/k.$$ $$\text {Now we have }\quad e-1\leq y\leq e$$ $$\text {so }\quad |y|\leq 1-e<1 \text { because } 0<e\leq\tfrac12.$$ Hence the binomial series for $\sqrt{1+y}$ about $y=0$ converges uniformly for $|y|\leq 1-e$, so a suitable partial sum gives a polynomial $P$ (in $y$, and hence in $x$) with $|\sqrt{x+e}-P(x)|\leq 1/k$ for all $x\in[0,1]$. Combining the two estimates, $|\sqrt x-P(x)|\leq 2/k$, and taking $k\geq200$ gives the required bound $1/100$.<|endoftext|>
TITLE: Real Numbers Raised to Imaginary Powers?
QUESTION [12 upvotes]: What is a real number to the power of an imaginary or complex number? e.g. 3i. I have searched through sites about imaginary numbers, but none seem to say anything about imaginary indices. Examples and explanations would be appreciated.
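For the specific example $3^i$, a quick numerical illustration may help (a sketch using Python's built-in complex arithmetic; it just evaluates $e^{i\ln 3}=\cos(\ln 3)+i\sin(\ln 3)$, the formula derived in the reply below):

    import cmath

    print(3 ** 1j)                          # ~ (0.4548 + 0.8906j)
    print(cmath.exp(1j * cmath.log(3)))     # the same value, via e^{i ln 3}
    print(cmath.cos(cmath.log(3)) + 1j * cmath.sin(cmath.log(3)))  # cos(ln 3) + i sin(ln 3)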
REPLY [8 votes]: Using Euler's formula:
$$c^{a+bi}=c^ac^{bi}=c^ae^{bi \ln (c)}$$
$$=c^a \left(\cos (b \ln (c))+ i \sin(b \ln (c)) \right)$$<|endoftext|>
TITLE: Are there ways to solve equations with multiple variables?
QUESTION [8 upvotes]: I am not at a high level in math, so I have a simple question that a simple Google search cannot answer, and the other Stack Exchange questions do not either. I thought about this question after reading a creative math book. Here is the question I was doing, for which I used the solutions manual in shame (not exact wording, but same idea):
The question in the blockquotes below is not the question I am asking for answers to. Some misunderstood what I am asking. What I am asking is in the last single-sentence paragraph.
Suppose $a_2,a_3,a_4,a_5,a_6,a_7$ are integers, where $0\le a_i< i$.
$\frac 57 = \frac {a_2}{2!}+\frac {a_3}{3!}+\frac {a_4}{4!}+\frac {a_5}{5!}+\frac {a_6}{6!}+\frac {a_7}{7!}$
Find $a_2+a_3+a_4+a_5+a_6+a_7$.
The solution to this particular question requires that $a_7$, and the other variables in later steps of the algebra, be obtained as remainders when both sides are divided by an integer. I am now wondering what happens if an equation ever comes up where that method cannot work because the variables cannot be obtained as remainders. Thus, my question is whether it is possible to solve a single algebraic equation (not a system) in more than two variables, where most variables have constant coefficients, the variables are assumed to be integers, and the solution is unique.
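For concreteness, the remainder method for the quoted problem can be carried out mechanically (a sketch in Python with exact Fraction arithmetic; the variable names are mine):

    from fractions import Fraction
    from math import factorial

    # Solve 5/7 = sum_{i=2..7} a_i / i!  with integers 0 <= a_i < i,
    # peeling off a_7, a_6, ... as remainders, exactly as described above.
    N = int(Fraction(5, 7) * factorial(7))        # multiply through by 7!: N = 3600
    a = {}
    for i in range(7, 1, -1):
        a[i] = N % i                               # a_i is the remainder modulo i
        N //= i
    print(a)                                       # {7: 2, 6: 4, 5: 0, 4: 1, 3: 1, 2: 1}
    print(sum(a.values()))                         # 9
    assert sum(Fraction(v, factorial(i)) for i, v in a.items()) == Fraction(5, 7)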
Does such a way to solve such equations in general exist? If so, please explain. What is this part of math called?
Thank you.
REPLY [2 votes]: When you cannot solve a diophantine equation
The MRDP theorem shows that there is no systematic method to determine whether any given diophantine equation has a solution. So if by "solve" you include figuring out whether there is no solution, then it is absolutely impossible in general. Even if you are guaranteed that it has finitely many solutions, you still cannot systematically find all of them. The reason is that otherwise we can use such an oracle $D$ to solve the halting problem as follows.
Let $H$ be the following program on input $(P,x)$:
    Output the following program $Q$ with input $n$:
        If $n = 0$ then:
            Run $P(x)$.
            Accept.
        Reject.
Then clearly $H(P,x)$ is a program that accepts a finite RE (recursively-enumerable) set. By MRDP we can computably convert it to a diophantine equation with finitely many solutions. Now all we have to do is to ask $D$ to find all solutions, and accept $(P,x)$ iff $D$ finds one solution.
Now since $D$ is supposed to always halt, it will find a solution iff $P$ halts on $x$, and hence we have computably solved the halting problem. But that is impossible and therefore $D$ cannot exist.
In summary, even if you guarantee that a diophantine equation has at most a single solution, there is in general no systematic procedure to find if there is a solution, not to say find the solution!
When you can solve a diophantine equation
For a specific diophantine equation, one may be able to find and prove the solutions via necessarily ad-hoc methods. The above shows that we cannot do better than ad-hoc methods, although each ad-hoc method may solve a large class of diophantine equations. For example, congruence mod $n$ for some fixed $n$ can easily show the non-existence of solutions for a large class of equations. It turns out that there are equations for which there are no solutions but such that congruences will never be able to rule out all possibilities.
Another situation in which we can solve a diophantine equation is if we are told that there are at least $k$ solutions, and required to find only $k$ of them. In this case we can iterate through all possibilities until we have found $k$ solutions.
Finally of course there is the trivial case where we are told to find all solutions within some finite set.<|endoftext|>
TITLE: Solutions to $a_1+2a_2+\cdots+ka_k = 1979$
QUESTION [5 upvotes]: For $k = 1,2,\ldots$ consider the $k$-tuples $(a_1,a_2,\ldots,a_k)$ of positive integers such that $$a_1+2a_2+\cdots+ka_k = 1979.$$ Show that there are as many such $k$-tuples with odd $k$ as there are with even $k$.
We know that we need only check up to $k = 62$ because otherwise it is just $0$. Since I don't see an easy way of counting solutions to $a_1+2a_2+3a_3 = 1979$ for example, is there something else we could do to prove the question?
Also it is interesting to note that $1979$ is a prime. Although the result doesn't seem easily generalizable if we replace $1979$ by some other integer.
I tried the following.
Attempt:
We first see that the number of solutions for some $k$ is given by the coefficient of $x^{1979}$ in the expansion of $$(x+x^2+\cdots)(x^2+x^4+\cdots)\cdots(x^{k}+x^{2k}+\cdots) = \prod_{i=1}^k\dfrac{x^i}{1-x^{i}}.$$ Thus, we need to show that $$Q(x) = \sum_{k=1; 2 \mid k}^{62}\left(\prod_{i=1}^k\dfrac{x^i}{1-x^{i}}\right)-\sum_{k = 1; 2 \nmid k}^{61}\left(\prod_{i=1}^k\dfrac{x^i}{1-x^{i}}\right)$$ doesn't contain the term $x^{1979}$ in its expansion.
REPLY [4 votes]: Let $(a_1, \cdots , a_k)$ be a solution to the desired equation. Note that this describes a partition of $1979$ given as
$$\lambda = (a_1 \times 1, a_2\times 2, a_3\times 3, \cdots , a_k\times k),$$
i.e. each positive solution describes a unique partition of $1979$ with largest part $k$. Turning the Young diagram of the partition sideways, this translates to a partition with exactly $k$ parts. In fact, you can check that the stipulation that $a_i > 0$ translates to the fact that we get a partition with not only exactly $k$ parts but $k$ distinct parts.
Therefore we have a bijection between $k$-tuple solutions and partitions with exactly $k$ distinct parts. Our problem is therefore reduced to proving that the number of partitions of $1979$ into an even number of distinct parts is exactly equal to the number of partitions of $1979$ into an odd number of distinct parts.
This was a problem tackled by Euler, whose solution is encapsulated in his pentagonal number theorem.
Euler's Pentagonal Number Theorem: Let $P_{DE}(n)$ and $P_{DO}(n)$ denote the number of partitions of $n$ with an even/odd number of distinct parts respectively. Then
$$P_{DE}(n) - P_{DO}(n) = \begin{cases}(-1)^k & n=\frac{3k^2-k}{2}\\ 0 & \text{otherwise}\end{cases}.$$
In particular, the number of partitions into an even number of distinct parts equals the number of partitions into an odd number of distinct parts except when $n$ is a generalized pentagonal number.
The proof of this theorem is not very difficult and is available in many places online so I won't prove it here.
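The statement is easy to check numerically for small $n$ before invoking it for $1979$ (a brute-force sketch in Python; each distinct-part partition is signed by the parity of its number of parts, so the printed value is $P_{DE}(n)-P_{DO}(n)$, which should be $\pm1$ exactly at generalized pentagonal numbers and $0$ otherwise):

    def gap(n):
        # P_DE(n) - P_DO(n): sum of (-1)^(number of parts) over partitions of n into distinct parts
        def signed_count(remaining, largest_allowed):
            if remaining == 0:
                return 1                          # the empty partition counts as "even"
            total = 0
            for part in range(min(remaining, largest_allowed), 0, -1):
                total -= signed_count(remaining - part, part - 1)   # one more part flips the sign
            return total
        return signed_count(n, n)

    pentagonal = {k * (3 * k - 1) // 2 for k in range(-25, 26)}
    for n in range(1, 16):
        print(n, gap(n), "(generalized pentagonal)" if n in pentagonal else "")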
The result now follows by noting that $1979$ is not a generalized pentagonal number.<|endoftext|>
TITLE: Significance of symmetric characteristic polynomials?
QUESTION [6 upvotes]: By symmetric characteristic polynomial, I mean for example... the characteristic polynomial of the $3\times3$ identity matrix is:
$x^3 - 3x^2 + 3x - 1$
similarly for the $4\times4$ identity matrix it is:
$x^4 - 4x^3 + 6x^2 - 4x + 1$
The absolute values of the coefficients are symmetric about the center.
Is there some general property of matrices that leads to this symmetry in the characteristic polynomial in cases other than the identity matrices or a scalar times identity? For example, is there something interesting we can say about a $4\times4$ matrix that has this characteristic polynomial?
$x^4 - 14x^3 + 26x^2 - 14x + 1$
REPLY [6 votes]: Those are examples of palindromic or anti-palindromic polynomials (I will call them pal and a-pal respectively [and (a)pal, for one that is either] in this answer). The identity matrix of order $n$ has characteristic polynomial $(x - 1)^n$, whose expansion has binomial coefficients, which are symmetric. This makes it obvious why its characteristic polynomial is (a)pal.
Now more generally, a polynomial is (a)pal iff all its roots are multiplicatively symmetric about $1$. That is, for each root $\lambda$ of a polynomial that is (a)pal, $\dfrac 1 \lambda$ is also a root.
One obvious and simple implication of this is that zero can never be a root, and any matrix $A$ with (a)pal characteristic polynomial is therefore non-singular. Since the eigenvalues of the inverse matrix $A^{-1}$ are exactly the reciprocals of the eigenvalues of $A$, this also implies that $A$ and $A^{-1}$ have the same spectrum.
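As a quick numerical illustration of the reciprocal-root property for the $4\times4$ example in the question (a sketch using numpy; here the roots happen to be real):

    import numpy as np

    p = [1, -14, 26, -14, 1]             # x^4 - 14x^3 + 26x^2 - 14x + 1, palindromic
    roots = np.sort(np.roots(p).real)
    print(roots)                          # ~ [0.0839, 1.0, 1.0, 11.9161]
    print(np.sort(1.0 / roots))           # the same multiset: roots come in reciprocal pairs

    # Consequently a matrix with this characteristic polynomial has the same spectrum as its inverse:
    A = np.diag(roots)                    # one (hypothetical) matrix with these eigenvalues
    print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A))), roots))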
We can do better than this.
Theorem
A square matrix with entries from an algebraically closed field has (a)pal characteristic polynomial if and only if it is similar to its inverse.
Proof: Let $A$ be a matrix with (a)pal characteristic polynomial, and let $$J = P^{-1}AP = \operatorname{diag}(J_1, \ldots, J_p)$$
be its Jordan normal form. Each block $J_k$ is of the form
\begin{equation*}
J_k = \begin{bmatrix}
\lambda_k & 1 & 0 & \cdots & 0\\
0 & \lambda_k & 1 & \cdots & 0\\
0 & 0 & \lambda_k & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & \lambda_k
\end{bmatrix}.
\end{equation*}
We know that $A^{-1}$ has Jordan normal form $J^{-1} = P^{-1}A^{-1}P$, where $J^{-1} = \operatorname{diag}(J_1^{-1}, \ldots, J_p^{-1})$, and each block $J_k^{-1}$ is of the form
\begin{equation*}
J_k^{-1} = \begin{bmatrix}
\frac 1 {\lambda_k} & 1 & 0 & \cdots & 0\\
0 & \frac 1 {\lambda_k} & 1 & \cdots & 0\\
0 & 0 & \frac 1 {\lambda_k} & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & \frac 1 {\lambda_k}
\end{bmatrix},
\end{equation*}
and has the same size as $J_k$ (strictly speaking, $J_k^{-1}$ is only similar to such a Jordan block, but replacing $J^{-1}$ by its Jordan form changes nothing in what follows). However, as the characteristic polynomial of $A$ is (a)pal, $J_k^{-1} = J_l$, for some $l$ (and hence $J_l^{-1} = J_k$). Thus, $J$ and $J^{-1}$ differ only by a rearrangement of the blocks $J_1, \ldots, J_p$, and are therefore similar. Specifically, there exists a permutation matrix $Q$ such that $J^{-1} = QJQ^{-1}$ (see note below). Then
\begin{equation*}
A^{-1} = PJ^{-1}P^{-1} = P(QJQ^{-1})P^{-1} = (PQP^{-1})(PJP^{-1})(PQP^{-1})^{-1} = RAR^{-1},
\end{equation*}
where $R = PQP^{-1}$. Thus, if $A$ has (a)pal characteristic polynomial, it is similar to its inverse.
The converse follows immediately by observing that similar matrices have the same characteristic polynomial. $\qquad \square$
Note: The permuting matrix $Q$ for transforming $J$ to $J^{-1}$ can be obtained by writing the identity matrix $I$ of the same order as $J$ in the block diagonal form $I = \operatorname{diag}(I_1, \ldots, I_p)$, where $I_k$ is the identity matrix of the same order as $J_k$, and then applying the same permutation on these respective blocks as applied on the blocks of $J$ to obtain $J^{-1}$. That is, if $$J^{-1} = \operatorname{diag}(J_{\sigma(1)}, \ldots, J_{\sigma(p)}),$$ where $\sigma$ denotes the permutation applied to the blocks, then $$Q = \operatorname{diag}(I_{\sigma(1)}, \ldots, I_{\sigma(p)}).$$<|endoftext|>
TITLE: How to represent matrix multiplication in tensor algebra?
QUESTION [6 upvotes]: How can we represent matrix multiplication in tensor algebra?
Even if we assume all matrices represent contravariant tensors only, clearly matrix multiplication does not correspond to the multiplication operation of the tensor algebra (the tensor product), since the former is grade-preserving or grade-reducing, whereas the latter is always grade-increasing.
And then if we allow matrices to represent either contravariant or covariant or mixed variance tensors, then things get even more confusing.
For instance, a quadratic form then can be represented by the same matrix as the bilinear form it generates via polarization.
Seemingly we must implicitly be using the universal property relating $V \otimes V$ (tensor product) and $V \times V$ (Cartesian product). But we can define the same type of (matrix) multiplication for $V \otimes V^*, V^* \otimes V,$ or $V^* \otimes V^*$ or between elements of any two.
Thus now even the claim that matrices represent linear transformations and that matrix multiplication is the composition of linear maps seems suspect to me.
Is this just a result of the fact that linear algebra was invented before multilinear algebra/tensor analysis, and thus people were abusing notation when using matrices without realizing it but then the convention stuck? Or is there something more to this which I am missing?
Related but more abstract and slightly different question: How do we describe standard matrix multiplication using tensor products?
Relevant wikipedia articles: https://en.wikipedia.org/wiki/Outer_product#Tensor_multiplication, https://en.wikipedia.org/wiki/Kronecker_product
REPLY [4 votes]: Fix a basis $\{e_1, \ldots, e_n\}$ of $V$, and consider the dual basis $\{f_1, \ldots, f_n \}$ of $V^\ast$. Then we have a basis
$$\{e_1\otimes f_1,\ldots, e_i \otimes f_j, \ldots, e_n \otimes f_n\}$$
for $V \otimes V^\ast$, and the matrix
$$A = (a_{ij})$$
is just a way of representing the element
$$\sum_{i=1}^n \sum_{j=1}^n a_{ij} \; e_i \otimes f_j \in V \otimes V^\ast.$$
Of course an element of $V \otimes V^\ast$ gives a linear map $V \to V$ by
$$(w \otimes f)(v) := f(v) w$$
and extending by linearity. Given two such elements, we can compose the corresponding functions:
$$(w' \otimes f')(w \otimes f)(v) = (w' \otimes f')(f(v) w) = f(v) f'(w) w' = f'(w) \; (w' \otimes f)(v)$$
so composition of linear maps is given by
$$(w' \otimes f') \circ (w \otimes f) = f'(w) \; (w' \otimes f)$$
extended by linearity. If you write your elements in the $e_i \otimes f_j$ basis and apply this operation to them, you'll see that the usual definition of matrix multiplication pops right out.
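A tiny numerical restatement of that last remark (a sketch with numpy; the contraction over the shared index $j$ is exactly the evaluation $f_j(e_k)=\delta_{jk}$):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))   # coefficients a_ij of sum_ij a_ij e_i (x) f_j
    B = rng.standard_normal((3, 3))   # coefficients b_kl of sum_kl b_kl e_k (x) f_l

    # Composing the corresponding maps contracts f_j against e_k, leaving sum_j a_ij b_jl:
    composed = np.einsum('ij,jl->il', A, B)
    print(np.allclose(composed, A @ B))   # True: ordinary matrix multiplication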
Of course all the calculations with explicit tensors above can be rephrased in terms of the universal property of the tensor product if you like.
This is all assuming you want the matrix to represent an element of $V \otimes V^\ast$ rather than an element of $V \otimes V$ or $V^\ast \otimes V^\ast$. But you can work out what should happen in cases like that the same way.<|endoftext|>
TITLE: Galois group of $x^5-5x+10$
QUESTION [8 upvotes]: I was illustrating the theorem on solvability by radicals through some examples of degree $5$ polynomials. One I chose was $x^5-5x+10$. I was (perhaps wrongly) going to prove that the Galos group is $S_5$, with expectation that it has exactly three real roots.
However, it has exactly one real root (which I saw by looking graph of this polynomial in $\mathbb{R}^2$ using some online software). Then I get stucked, and couldn't completely find the Galois group of this polynomial.
How do we proceed to determine the Galois group of this polynomial over $\mathbb{Q}$?
REPLY [6 votes]: Observing the discriminant of a polynomial is very helpful for finding its Galois group. It is known that if $D$ is the discriminant of the polynomial $f(x)$, then the Galois group $G=G_{f}$ of $f(x)$ is contained in $A_{5}$ iff $D\in \mathbb{Q}^{2}$. (This holds over general fields with $\operatorname{char}\neq 2$, not only $\mathbb{Q}$. The proof is not hard and can be found in "Abstract Algebra", Chap 14, Proposition 33 of Dummit-Foote.) As Wolfram says, the discriminant here is $30450000$, which is not a square. So $G$ is not contained in $A_{5}$.
Also, by the Eisenstein criterion with $p=5$, $f(x)$ is irreducible and $[\mathbb{Q}(\alpha):\mathbb{Q}]=5$ for any root $\alpha$ of $f$. So $|G|$ has to be divisible by $5$.
Then if you see here, the only candidates are $S_{5}$ and the general affine group $GA(1,5)$. If we reduce the polynomial modulo $3$, then $\overline{f}(x)=x^{5}+x+1=(x-1)^{2}(x^{3}-x^{2}+1)$ in $\mathbb{F}_{3}[x]$. Since $x^{3}-x^{2}+1$ is irreducible over $\mathbb{F}_{3}$, the order of $G_{\overline{f}}$ has to be divisible by $3$. Since $G_{\overline{f}}\leq G_{f}$ (this is a non-trivial result; see here for Tate's proof), $|G_{f}|$ is also divisible by $3$, so $G_f$ contains a $3$-cycle. One can show that a $5$-cycle and a $3$-cycle in $S_{5}$ generate $A_{5}$; since we also know $G_f\not\subseteq A_{5}$, it follows that $G_{f}=S_{5}$.
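Both computations quoted above are quick to reproduce (a sketch with sympy, assuming it is available; the exact printed form of the factorization depends on sympy's representation of $\mathbb{F}_3$):

    from sympy import symbols, discriminant, factor

    x = symbols('x')
    f = x**5 - 5*x + 10
    print(discriminant(f, x))      # 30450000, not a perfect square
    print(factor(f, modulus=3))    # factors as (x - 1)**2 * (x**3 - x**2 + 1) over GF(3)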
I think it will be possible to find a prime $p$ s.t. $\overline{f}(x)\in\mathbb{F}_{p}[x]$ has an irreducible factor of degree $2$, and then you don't have to calculate the discriminant of $f$.<|endoftext|>
TITLE: Theorem 2.22 from RCA Rudin
QUESTION [6 upvotes]: I read this interesting result from Rudin's book but I would like to clarify some confusing moments.
As I understood $(\mathbb{R}^1, +)$ is group and $(\mathbb{Q}, +)$ is subgroup.
He considers cosets $E_r=r+\mathbb{Q}$ for $r\in \mathbb{R}$. Since each $E_r\neq \varnothing$, by the axiom of choice we can construct a set $E$ which contains exactly one element from each $E_r$.
Question 1: I am not sure that $(E+r)\cap (E+s)=\varnothing$ for $r,s\in \mathbb{Q}$, $r\neq s$. Let's consider cosets $\sqrt{2}+\mathbb{Q}$ and $1+\sqrt{2}+\mathbb{Q}$ and let $E$ contains $\sqrt{2}$ and $\sqrt{2}+1$ from the first and second cosets, respectively. But $E+1$ also contains $\sqrt{2}+1$. So we see that $E\cap (E+1)\neq \varnothing$.
Question 2: Suppose that I have made a mistake in my previous question. Let $y$ and $z$ lie in the same coset of $\mathbb{Q}$. Where's the contradiction?
Question 3: Why such $y$ exists? I am about point $(b)$.
Would be very thankful for help!
REPLY [4 votes]: Question 1: The cosets $\sqrt{2}+\mathbb{Q}$ and $1+\sqrt{2}+\mathbb{Q}$ are actually the same. If $x=\sqrt{2}+q\in\sqrt{2}+\mathbb{Q}$, then $q-1$ is also rational, so $x=1+\sqrt{2}+(q-1)$ is also in $1+\sqrt{2}+\mathbb{Q}$, and similarly conversely.
Question 2: By assumption, $E$ contains exactly one point from each coset. So if $y,z\in E$ are two distinct points which are in the same coset, this is a contradiction.
Question 3: Again, by assumption, $E$ contains exactly one point from each coset. In particular, it contains exactly one point from the coset $x+\mathbb{Q}$, and this is the point we call $y$.<|endoftext|>
TITLE: A discrete topological space is a space where all singletons are open $\implies$ all sets are clopen? Closed?
QUESTION [6 upvotes]: I know that a discrete topological space is where all singletons are open.
For example, $\mathbb{N}$ with the subspace topology inherited from $(\mathbb{R}, \mathfrak{T}_{usual})$. This is the case because we can find $\{n\} = (a,b) \cap \mathbb{N}$ which is open. Hence all singletons are open.
But are all sets are clopen? Closed?
My thoughts: Suppose we take a singleton $\{x\}$ in a discrete space $X$; we know the singleton is open, hence $\{x\}^c$ is closed. But it is also an arbitrary union of singletons, so it is open; so all sets are clopen.
REPLY [2 votes]: Let $F$ be an arbitrary subset of $X$ where $X$ is equipped with discrete topology.
As you said: in a discrete topological space all singletons are open.
As you said: arbitrary unions of singletons are open so $F^c=\bigcup_{x\in F^c}\{x\}$ is an open set.
(You don't even need this subroute: in a discrete space all sets are open by definition)
Then its complement $F$ is a closed set.<|endoftext|>
TITLE: Covariant derivative fulfills Levi-Civita in Euclidean space
QUESTION [5 upvotes]: $\newcommand{\Reals}{\mathbf{R}}$In our lecture, when we introduced the Levi-Civita connection, we had as an example the directional derivative of a vector field $X$ in direction of another vectorfield $Y$ in $\Reals^n$ defined by
$$
D_XY(p) := \lim_{t \to 0} \frac{Y(p + tX(p))-Y(p)}{t}.
$$
We have written down that this definition fulfills the definition of the Levi-Civita connection, but actually I don't even see why it is a connection. For example, why does $D_{f \cdot X}Y = f \cdot D_XY$ hold for an arbitrary $f \in C^{\infty}(\Reals^n)$?
My ideas: Interpreting p as a tangent vector one could use the $\Reals$-linearity of the vectorfield:
\begin{align*}
D_{f \cdot X}Y(p)
&= \lim_{t \to 0} \frac{Y(p + t(f \cdot X)(p))-Y(p)}{t} \\
&= \lim_{t \to 0} \frac{Y(p + tf(p) \cdot X(p))-Y(p)}{t} \\
&= \lim_{t \to 0} \frac{Y(p) + tf(p)Y(X(p))-Y(p)}{t} \\
&= f(p) \cdot \lim_{t \to 0} \frac{tY(X(p))}{t} \\
&= f(p) \cdot \lim_{t \to 0} \frac{Y(p + tX(p)) - Y(p)}{t} \\
&= f(p) D_X Y.
\end{align*}
Is that correct? Thank you a lot!
REPLY [4 votes]: The point is that $f(p)$ is just a number, and the derivative depends only on $f(p)$, not on values of $f$ at other points. If $f(p) = 0$, there's nothing to prove. Otherwise, $t f(p) \to 0$ when $t \to 0$, and
\begin{align*}
D_{f\cdot X} Y(p)
&= \lim_{t \to 0} \frac{Y\bigl(p + t(f \cdot X)(p)\bigr) - Y(p)}{t} \\
&= \lim_{t \to 0} \frac{Y\bigl(p + tf(p)X(p)\bigr) - Y(p)}{t} \\
&= f(p)\lim_{t \to 0} \frac{Y\bigl(p + tf(p)X(p)\bigr) - Y(p)}{tf(p)} \\
&= f(p) \lim_{t \to 0} \frac{Y\bigl(p + tX(p)\bigr) - Y(p)}{t} \\
&= f(p) D_{X} Y.
\end{align*}<|endoftext|>
TITLE: Evaluating $\int _{-100}^{100}\lfloor {x^3}\rfloor \,dx$
QUESTION [14 upvotes]: Is there an alternative better solution?
$I=\displaystyle\int_{-100}^{100}\lfloor x^3\rfloor\,dx$
$=\displaystyle\int_{-100}^{100}\lfloor(100-100-x)^3\rfloor\,dx$ $\quad$ [$\because\int_{a}^{b}f(x)\,dx=\int_{a}^{b}f(a+b-x)\,dx$]
$=\displaystyle\int_{-100}^{100}\lfloor-x^3\rfloor\,dx$
$=\displaystyle\int_{-100}^{100}(-\lfloor x^3\rfloor-1)\,dx$ $\quad$ [$\because \lfloor x\rfloor+\lfloor-x\rfloor=-1$ when $x\notin \mathbb{Z}$]
$\Rightarrow I=-I-200$ $\quad$ $\Rightarrow I=-100$
EDIT. Is there any area interpretation of the integral?
REPLY [8 votes]: For $0>x\not \in \Bbb Z$ we have $[-x]=-[x]-1.$ $$\text {So }\quad \int_{-100}^0[y^3]\;dy = \int_0^{100}(-[x^3]-1)\;dx$$ is obtained by letting $y=-x.$ $$\text { Therefore } \quad \int_{-100}^{100}[x^3]\;dx=\int_{-100}^0 [y^3]\;dy+\int_0^{100}[x^3]\;dx=$$ $$=\int_0^{100}(-[x^3]-1)\;dx+\int_0^{100} [x^3]\;dx=$$ $$=\int_0^{100} (-1-[x^3]+[x^3])\;dx=\int_0^{100}(-1)\;dx=-100.$$
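The value can also be cross-checked numerically (a sketch in Python; since $\lfloor x^3\rfloor=k$ exactly on $[\sqrt[3]{k},\sqrt[3]{k+1})$, the integral is a finite sum of piecewise-constant pieces, shown here for smaller endpoints $N$ where the same argument gives $-N$):

    def cbrt(k):
        return (abs(k) ** (1 / 3)) * (1 if k >= 0 else -1)

    def integral_floor_cube(N):
        # floor(x^3) equals k on [cbrt(k), cbrt(k+1)), so integrate exactly piece by piece
        return sum(k * (cbrt(k + 1) - cbrt(k)) for k in range(-N ** 3, N ** 3))

    print(integral_floor_cube(2))    # ~ -2.0
    print(integral_floor_cube(10))   # ~ -10.0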
Remark: (Added July 2019). We have $[-x] \ne -[x]-1$ if $0\ge x\in \Bbb Z.$ But the integrals in the 1st displayed line are still equal because $\{y\in [-100,0]: [y^3]\ne -[y^3]-1\}$ is finite.<|endoftext|>
TITLE: Eigenvalues of product of a matrix with a diagonal matrix
QUESTION [5 upvotes]: I have got a question and I would appreciate if one could help with it.
Assume $S$ is a diagonal matrix (with diagonal entries $s_1, s_2, \cdots$) and $M$ is a positive symmetric matrix with eigen value decomposition as follows:
$$\mathrm{eig}(M) = ULU^T$$
where $U^T$ means the transpose of $U$. I am trying to find out about the eigenvalues of $SM$. In other words, is there any relation between the eigenvalues of the matrix $S$, the matrix $M$ and their product $SM?$
any help/hint is appreciated
REPLY [4 votes]: There's no immediate relationship, no. The largest (in absolute value) eigenvalue of the product is no larger than the product of the largest eigenvalues, and the smallest eigenvalue of the product is no smaller than the product of the smallest.
Take any unit vector $u$ in the plane, and let $u'$ be 90 degrees counterclockwise from $u$. And let $Q$ be the matrix whose rows are $u$ and $u'$. Then for any diagonal matrix $D$, $Q^t D Q$ represents stretching along $u$ by $d_1$ and along $u'$ by $d_2$ (where these are the diagonal entries of $D$). Hold that thought.
For $u = e_1$ and $(d_1, d_2) = (1, 4)$, for instance, we get a matrix $M$ that's just diagonal, with $1$ and $4$ on the diagonal. If $u$ is a unit vector in the $45$-degree direction, we get something else, but again with eigenvalues $1$ and $4$. For each possible angle $\theta$, let
$$
M_\theta = Q^t \begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix}Q
$$
where $Q$ is the matrix made from $u$ and $u'$, and $u$ is the vector
$$
\begin{bmatrix}
\cos \theta \\
\sin \theta
\end{bmatrix}.
$$
So $M_\theta$ has eigenvalues $1$ and $4$. Let
$$
S = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}
$$
Then for $\theta = 0$, we have that $SM_\theta$ is a diagonal matrix with eigenvalues 2 and 12, the product of smallest and product of largest eigenvalues. But for $\theta = \pi/2$, the product has eigenvalues $8 = 2 \cdot 4$ and $3 = 1 \cdot 3$, the "middle" two products of the eigenvalues of the two original matrices. For intermediate values of $\theta$, you get other possible eigenvalues. This shows that the eigenvalues of the product can range all the way from the smallest possible (the product of the two smallest eigenvalues of the factors) to the largest possible.
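For what it's worth, this example can be checked numerically (a sketch with numpy, using the $M_\theta$ and $S$ defined above):

    import numpy as np

    def M(theta):
        u = np.array([np.cos(theta), np.sin(theta)])        # unit vector at angle theta
        up = np.array([-np.sin(theta), np.cos(theta)])       # u rotated 90 degrees counterclockwise
        Q = np.vstack([u, up])                                # rows are u and u'
        return Q.T @ np.diag([1.0, 4.0]) @ Q                  # eigenvalues 1 and 4 for every theta

    S = np.diag([2.0, 3.0])
    for theta in (0.0, np.pi / 4, np.pi / 2):
        evals = np.sort(np.linalg.eigvals(S @ M(theta)).real)
        print(round(theta, 3), evals)    # theta = 0 gives about [2, 12]; theta = pi/2 gives about [3, 8]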
Of course, the PRODUCT of the eigenvalues of $SM$ is the product of those of $S$ with those of $M$, by determinants. So that's another small constraint on them.<|endoftext|>
TITLE: Extension, restriction, and coextension of scalars adjunctions in the case of noncommutative rings?
QUESTION [6 upvotes]: If we have a ring homomorphism $f: R \to S$, then we can put an $(R,S)$ or $(S, R)$ bimodule structure on $S$ and define the extension $f_!:= S \otimes_R (-) : R \text{Mod} \to S \text{Mod}$, restriction $f^*: S \text{Mod} \to R \text{Mod}$, and coextension $f_*:= \text{Hom}_{R}(S, -) : R \text{Mod} \to S \text{Mod}$ of scalars functors. These constructions do not depend at all on the commutativity of the rings $R$ and $S$, but in every source I've seen that mentions extension and coextension of scalars, it is assumed that the rings are commutative, such as in the nLab article.
Is there any reason that the rings are assumed to be commutative? Does the adjunction still hold in the noncommutative case? Are there any properties of these constructions that are "less nice" in the noncommutative case? Or is it simply for convenience?
Most of my algebra knowledge is self-taught, so I apologize if I'm missing something obvious.
REPLY [3 votes]: As mentioned by Qiaochu Yuan in the comments, one nice thing about the case where $f: C \to K$ is a morphism of commutative rings is that scalar extension $f_!$ is strong monoidal.
In particular, for $f: C \to K$ a morphism of commutative rings,
\begin{align}
f_!(V) \otimes_K f_!(W) =& (K \otimes_C V) \otimes_K (K \otimes_C W)\cong (V \otimes_C K) \otimes_K (K \otimes_C W) \\ \cong& V \otimes_C K \otimes_C W
\cong K \otimes_C V \otimes_C W = f_!(V \otimes_C W).
\end{align}
This is given as Proposition 3 in section II.5.1 of Bourbaki's Algebra I.
As a corollary, it follows that restriction of scalars is lax monoidal.<|endoftext|>
TITLE: understanding relevance of Lie vs topological groups
QUESTION [6 upvotes]: A silly, easy-to-state question. When dealing with topological groups, I'm trying to understand more deeply the advantages of having a Lie group structure over just a topological one. Can somebody please illustrate this more in depth?
I understand what it implies right away, say having topological against differentiable manifolds and so on, but I guess I could really use examples/good guidance in this specific case to grasp better the full implications of being a Lie group?
Sorry if I'm being too vague here; any help is happily appreciated.
REPLY [7 votes]: One of the most important features of Lie groups is that they can be studied by means of their Lie algebra. Difficult questions about the global geometry of a Lie group sometime translate to simpler algebraic questions about its Lie algebra.
For example, compact connected Lie groups can be classified in terms of Dynkin diagrams, which is an algebraic object associated to the Lie algebra.
A topological group has a priori no Lie algebra.
REPLY [6 votes]: Naturally, Lie groups admit enough structure to support differential calculus and associated tools.
Conversely, being a topological group (rather than a Lie group) is not analogous to being a topological manifold (rather than a smooth manifold): "Most" topological groups aren't manifolds in any sense. For starters, think of the additive group of rationals or $p$-adic numbers, or the countable product of cyclic groups of order two with the product topology (whose total space is homeomorphic to the Cantor ternary set), or an irrational winding on a torus.<|endoftext|>
TITLE: Evaluate $\int_0^\infty \frac{dx}{x^2+2ax+b}$
QUESTION [5 upvotes]: For $a^2<b$, evaluate $$\int_0^\infty \frac{dx}{x^2+2ax+b}.$$<|endoftext|>
TITLE: Extending morphisms between varieties
QUESTION [5 upvotes]: This is Exercise 5.8 from Gathmann's notes on Algebraic Geometry, and I'm having a bit of trouble for (a) and (b): page 41 of http://www.mathematik.uni-kl.de/~gathmann/class/alggeom-2014/main.pdf
(a) asks you to show that any morphism $f:\mathbb{A}^1\backslash \{0\}\rightarrow\mathbb{P}^1$ can be extended to a morphism from $\mathbb{A}^1$ to $\mathbb{P}^1$.
(b) asks you to show that not all morphisms $f:\mathbb{A}^2\backslash \{(0,0)\}\rightarrow\mathbb{P}^1$ can be extended to a morphism from $\mathbb{A}^2$ to $\mathbb{P}^1$.
I suspect for (b) that the morphism that takes $(x,y)$ to $(x:y)$ can't be extended, but I'm not exactly sure how to prove this. Also, is the fact that $\mathcal{O}_{\mathbb{A}^2}(\mathbb{A}^2\backslash \{(0,0)\})=\mathcal{O}_{\mathbb{A}^2}(\mathbb{A}^2)$ useful in any way?
I would prefer to have an answer that doesn't involves schemes or anything too advanced like that.
REPLY [5 votes]: Let $\phi: k^2 \setminus \{(0,0)\} \rightarrow \mathbb{P}^1$ be the map $\phi(x,y) = \overline{(x,y)}$. If you try to extend $\phi$ to a map $k^2 \rightarrow \mathbb{P}^1$, it won't even be continuous.
Suppose $\phi$ did extend to a function on $k^2$. Let $U_0 = \{ \overline{(x,y)} \in \mathbb{P}^1 : x \neq 0\}$, and similarly define $U_1$. Then $U_0, U_1$ form an open cover of $\mathbb{P}^1$. Suppose that $\phi(0,0)$ lies in $U_0$. Then the preimage of $U_0$ is $$\{ (x,y) \in k^2 : x \neq 0 \} \cup \{(0,0) \}$$ whose complement in $k^2$ is $$E = \{ (0,y) : y \neq 0\}.$$ Now $E$ should be closed in $k^2$. But notice $E$ is contained in $\{0\} \times k$, a closed set in $k^2$ which is homeomorphic to the affine line $k$ via $(0,y) \mapsto y$. The image of $E$ in $k$ should then be closed in $k$. But closed sets in $k$ are either finite sets or the whole space, a contradiction.<|endoftext|>
TITLE: Most functions are measurable
QUESTION [19 upvotes]: My professor once said that if you did not use the axiom of choice to build a function $f : \mathbb{R}^n \to \mathbb{R}^m$, then it is Lebesgue measurable. To what extent is this true?
REPLY [13 votes]: As Henning said, formally speaking, this is not quite true. We can define sets, without using the axiom of choice, which we cannot prove that they are measurable.
For example, every universe of set theory has a subuniverse satisfying the axiom of choice, in a very canonical way, called $L$. We can look at a set of reals which is a Vitali set in $L$, or any other non-measurable set that lives inside $L$. The axiom of choice is not needed for defining this set; however, under some assumptions this set will be an actual Vitali set, and thus non-measurable, and under other assumptions it might be a countable set and therefore measurable.
What your professor really meant to say is that it is consistent that the axiom of choice fails and every set is Lebesgue measurable. This was proved by Solovay in 1970. So in most cases if you just write a definition of a "reasonable" set, it is most likely measurable. But nonetheless, this is not formally correct. As far as analysis goes, though, it is usually the case that "explicitly defined sets" are measurable.<|endoftext|>
TITLE: How is the study of wavelets not just a special case of Fourier analysis?
QUESTION [6 upvotes]: As far as I can tell, "wavelets" is just a neologism for certain "non-smooth" families of functions which constitute orthonormal bases/families for $L^2[0,1]$.
How is wavelet analysis anything new compared to the study of Fourier coefficients or Fourier series or the orthogonal decomposition of $L^2$ functions (i.e. in the most abstract possible function analytic sense, not in the sense of using specifically the orthonormal families of sines/cosines or complex exponentials)?
Wavelet transforms just seem like the Fourier transform using a different orthonormal family for $L^2$ besides the complex exponentials, but conceptually this isn't really an achievement. The complex exponentials are a convenient orthonormal family, but at the end of the day aren't they just an orthonormal family?
REPLY [2 votes]: Not to dredge up old posts, but comparing wavelets and the Fourier transform is really apples and oranges. A much better comparison is that of the STFT and the various wavelet transforms. As @mathematician mentioned, the benefit of wavelets over STFT is that wavelet analysis allows for us to (somewhat) sidestep the Gabor limit via a more flexible tiling of the time-frequency plane.<|endoftext|>
TITLE: Does localization commute with taking radicals?
QUESTION [7 upvotes]: Let $A$ be a ring, $S\subset A$ a multiplicative set, and $I\subset A$ an ideal not intersecting $S$. For any ideal $J$, let $r(J)$ denote the radical of $J$. Is $S^{-1}r(I) = r(S^{-1}I)$?
Certainly $S^{-1}r(I)$ is generated by elements of the form $\frac{x}{s}$, where $x^n\in I$. This implies that $\left(\frac{x}{s}\right)^n = \frac{x^n}{s^n}\in S^{-1}I$, so $\frac{x}{s}\in r(S^{-1}I)$. This shows that $S^{-1}r(I)\subseteq r(S^{-1}I).$
The other direction seems less clear. Certainly $r(S^{-1}I)$ is generated by elements of the form $\frac{x}{s}$ where $\frac{x^n}{s^n}\in S^{-1}I$. It isn't clear to me that this implies that $\frac{x}{s}\in S^{-1}r(I)$.
REPLY [7 votes]: Let $\displaystyle\frac{x}{s}\in r(S^{-1}I)$, so $\displaystyle\frac{x^n}{s^n}\in S^{-1}I$ for some $n$ and therefore $\displaystyle\frac{x^n}{s^n}=\frac{i}{t}$ for some $i\in I, t\in S$
Then $ux^{n}t=us^{n}i\in I$ for some $u\in S$, so $(uxt)^n=u^{n-1}t^{n-1}\cdot(ux^{n}t)\in I$, hence $uxt\in r(I)$, and therefore
$\hspace {.25 in}\displaystyle \frac{x}{s}=\frac{uxt}{uts}\in S^{-1}(r(I))$<|endoftext|>
TITLE: How to find $\int \frac{x^2-1}{x^3\sqrt{2x^4-2x^2+1}} dx$
QUESTION [8 upvotes]: How to find ?$$\int \frac{x^2-1}{x^3\sqrt{2x^4-2x^2+1}} dx$$
I tried using the substitution $x^2=z$.But that did not help much.
REPLY [4 votes]: $\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\Li}[1]{\,\mathrm{Li}_{#1}}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
With Euler Sub$\ldots$
$$
x = {\root{\root{2} - \root{2}t^{2}} \over \root{2}\root{\root{2} + 2t}}
\quad\imp\quad
t \equiv \root{2x^{4} - 2x^{2} + 1} - \root{2}x^{2}
$$
the integral adopts a relatively simple form
\begin{align}
\int{x^{2} - 1 \over x^{3}\root{2x^{4} - 2x^{2} + 1}}\,\dd x & =
\int{t^{2} + 2\root{2}t + 1 \over \pars{t^{2} - 1}^{2}}\,\dd t
\\[3mm] & =
\int{\dd t \over t^{2} - 1} +
\root{2}\int{2t\,\dd t \over \pars{t^{2} - 1}^{2}} +
2\int{\dd t \over \pars{t^{2} - 1}^{2}} = -\,{t + \root{2} \over t^{2} - 1}
\end{align}<|endoftext|>
TITLE: What is the probability that the center of a odd sided regular polygon lies inside a triangle formed by the vertices of the polygon?
QUESTION [5 upvotes]: Three distinct vertices are chosen at random from the vertices of a given regular polygon of $2n+1$ sides. Those three vertices determine a triangle. What is the probability that the center of the polygon lies inside the triangle determined by the three vertices of the polygon?
Note: All vertices of a regular polygon lie on a common circle (the circumscribed circle), the center of this circle is the center of the polygon.
P.S: why is this question asked specifically for an odd-sided regular polygon? Will the answer differ for an even-sided regular polygon?
I have just found a difficult proof of a much more general question (the probability of a fixed point in a convex region being inside a triangle formed by any three points from the convex region). I am not interested in such generality; can one give a much simpler proof of this special case (the convex region being the odd-sided polygon and the fixed point the center of the polygon)?
REPLY [6 votes]: Call the $(2n+1)$-gon $P$, and label an arbitrary vertex $A$. We assume without loss of generality that $A$ is one of the chosen vertices. Let the other two chosen vertices in clockwise order be $B$ and $C$, respectively, and suppose there are $k$ edges of $P$ contained within the minor arc $BC$ (the arc between $B$ and $C$ not containing $A$). If $\triangle ABC$ contains the center of $P$, then $\angle BAC < 90^\circ$; since the inscribed angle $\angle BAC$ equals $k\cdot \frac{180}{2n+1}$ degrees, this implies $k\cdot \frac{180}{2n+1} < 90$; hence $1\le k \le n$.
For a fixed $k$ between $1$ and $n$ inclusive, it is easy to verify that there are $k$ choices of $B$ and $C$ such that exactly $k$ edges of $P$ are contained within minor arc $BC$ and $\triangle ABC$ contains the center. This gives us a total of $\sum_{k=1}^n k = n(n+1)/2$ valid choices for $B$ and $C$. Because we have $\binom{2n}{2}$ ways to choose $B$ and $C$ from all the remaining vertices of $P$, the probability that a randomly chosen triangle contains the center of $P$ is $$ \frac{\frac{n(n+1)}{2}}{\binom{2n}{2}} = \frac{n+1}{2(2n-1)}. $$
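As a quick computational sanity check (my own addition, not part of the original argument), a brute-force count for small $n$ confirms this probability, as well as the total count mentioned in the note below. A triangle on the circumscribed circle contains the center exactly when every arc between consecutive chosen vertices is shorter than half the circle:

    from itertools import combinations

    def center_count(n):
        # number of triangles on the vertices of a regular (2n+1)-gon containing the center
        m = 2 * n + 1
        count = 0
        for i, j, k in combinations(range(m), 3):
            arcs = (j - i, k - j, m - (k - i))       # edge counts of the three arcs
            if all(a < m / 2 for a in arcs):         # all arcs shorter than half the circle
                count += 1
        return count

    for n in range(1, 8):
        m = 2 * n + 1
        total = m * (m - 1) * (m - 2) // 6           # binom(2n+1, 3)
        assert center_count(n) == n * (n + 1) * (2 * n + 1) // 6
        print(n, center_count(n) / total, (n + 1) / (2 * (2 * n - 1)))   # the two agree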
Note: This probability implies there are a total of $\frac{n(n+1)(2n+1)}{6} = \sum_{k=1}^n k^2$ triangles that contain the center of $P$. Is there also a bijective proof that counts this directly?<|endoftext|>
TITLE: Fourier cosine transforms of Schwartz functions and the Fejer-Riesz theorem
QUESTION [5 upvotes]: This question spanned from a previous interesting one. Let $k$ be a real number greater than $2$ and
$$\varphi_k(\xi) = \int_{0}^{+\infty}\cos(\xi x) e^{-x^k}\,dx $$
the Fourier cosine transform of a function in the Schwartz space.
Is it possible to use the Fejér-Riesz theorem, or some variation of it, to prove that $\varphi_k(\xi)<0$ for some $\xi\in\mathbb{R}^+$?
REPLY [3 votes]: One proof is given in the paper
Noam D. Elkies, Andrew M. Odlyzko, and Jason A. Rush: On the packing densities of superballs and other bodies, Invent. Math. 105 (1991), 613-639.
Instead of Fejér-Riesz, we use the Fourier inversion formula directly.
Assume for contradiction that $\phi_k(\xi) \geq 0$ for all $\xi$.
Then for all $x$ we would have
$$
0 \leq \int_0^\infty \phi_k(\xi) \, (1 - \cos (\xi x))^2 \, d\xi
= \frac12 \int_0^\infty \phi_k(\xi) \, (3 - 4 \cos (\xi x) + \cos (2\xi x)) \, d\xi,
$$
which means
$$
3 - 4 e^{-x^k} + e^{-(2x)^k} \geq 0
$$
for all $x$. But for $x$ near $0$ we may write
$e^{-x^k} = 1 - \epsilon$ and
$e^{-(2x)^k} = (1 - \epsilon)^{2^k} = 1 - 2^k \epsilon + O(\epsilon^2)$,
so
$$
3 - 4 e^{-x^k} + e^{-(2x)^k}
= 3 - 4(1-\epsilon) + (1 - 2^k \epsilon + O(\epsilon^2))
= (4 - 2^k) \epsilon + O(\epsilon^2).
$$
Thus $3 - 4 e^{-x^k} + e^{-(2x)^k}$ becomes negative for small $x$
once $2^k > 4$, which is equivalent to $k>2$,
and we have our desired contradiction.<|endoftext|>
TITLE: Real Analysis question that affects how to think about the Dirac delta function.
QUESTION [19 upvotes]: Okay, here are the ingredients to this question.
Me: 60 years old. 39 years ago I took two semesters of Real Analysis using the Royden textbook. Rusty is an understatement. But I am still quite anal and OCD. I am also an electrical engineer, works in signal processing. DSP and Linear System Theory are important to me. I have also had two semesters (as a grad student) of Functional Analysis (using the Kreyszig text) and multiple courses in probability, random variables, and random processes (a.k.a. "stochastic processes").
Electrical Engineers (and I suspect many physicists) essentially treat $\delta(x)$ as a "function". But it isn't. One thing I remember from R.A. is that if
$$ f(x) = g(x) $$
almost everywhere in $E$, then
$$ \int_E f(x) \, dx \ = \ \int_E g(x) \, dx $$
problem is, of course, that electrical engineers (and their professors) like to think of
$$\begin{align} f(x) & = \delta(x) \\
g(x) & = 0 \end{align} $$
and that
$$ \int\limits_{-1}^{+1} f(x) \, dx = 1 \quad \ne \quad \int\limits_{-1}^{+1} g(x) \, dx = 0 $$
yet $f(x) = g(x)$ everywhere except at one single value of $x$ .
Now I have heard (or read) that the "Dirac delta function is not really a function but is a 'distribution' or a 'functional'." And I understand the meaning of the terms "distribution" in the context of random variables and "functional" in the context of metric spaces, normed spaces, etc. Is that the given usage of these two terms regarding $\delta(x)$?
Question 1: Is the usage of the notation
$$ \int\limits_{-\infty}^{+\infty} f(x) \, \delta(x) \, dx $$
a misnomer? There is no integration going on. It's just a linear functional that maps the function $f(x)$ to the number $f(0)$. Now EEs and maybe physicists will comfortably look at that as an integral that is the same as
$$ \int\limits_{-\infty}^{+\infty} f(0) \, \delta(x) \, dx = f(0)\int\limits_{-\infty}^{+\infty} \delta(x) \, dx = f(0) $$
But since $\delta(x)$ is not a function at all, what do mathematicians mean with that notation?
Question 2: How fatal is it for electrical engineers and physicists to consistently treat the Dirac delta function simply as a limit of "nascent deltas" such as
$$ \delta(x) \ \triangleq \ \lim_{\sigma \to 0^+} \frac{1}{ \sigma} \operatorname{rect} \left( \frac{x}{\sigma} \right) $$
where $ \operatorname{rect}(x) \triangleq
\begin{cases}
1 \quad |x|<\frac{1}{2} \\
\frac{1}{2} \quad |x|=\frac{1}{2} \\
0 \quad |x|>\frac{1}{2} \\
\end{cases} $
or
$$ \delta(x) \ \triangleq \ \lim_{\sigma \to 0^+} \frac{1}{\sigma} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}\left(\frac{x}{\sigma}\right)^2} $$
What is gonna kill us to simple-mindedly treat the Dirac delta as such a function? A function that is zero almost everywhere, yet it integrates to be equal to 1 (where the function that is zero everywhere integrates to be 0).
If we do that, within our own disciplines, what mathematical problem might crop up that kills us?
This is not exactly the same but smells a lot like this concern from Richard Hamming:
“Does anyone believe that the difference between the Lebesgue and Riemann integrals can have physical significance, and that whether say, an airplane would or would not fly could depend on this difference? If such were claimed, I should not care to fly in that plane.”
I might ask the same question regarding the mathematician's and the engineer's understanding of the Dirac delta function. How might a mathematician answer that question?
REPLY [13 votes]: First, a comment about the property you mention. It's true that if $f$ and $g$ are measurable functions and $f = g$ almost everywhere, then
$$
\int_Ef(x)dx = \int_Eg(x)dx
$$ This won't hold if $f$ and $g$ are distributions, because as you say distributions aren't functions. So the integral notation is just that - it's notation. Now...
A1. The answer to your first question comes from wanting the theory of distributions to functionally "look like" the theory of nicer linear functionals, i.e. we were really good at manipulating integrals, so we wanted the new thing to operationally work the same. For physicists, who were used to the Riesz representation theorem, this meant wanting to think of linear functionals as being just integrals against a fixed function. Hence the inner product/integral notation
$$
F(\varphi) = \langle F,\varphi\rangle = \int F(x)\varphi(x)dx
$$
Note that in distribution theory, this is not a standard Lebesgue integral - it's just formal notation. The integral notation is ubiquitous though - it's not just linear functionals that are written this way, but also linear operators
$$
g(x) = \int k(x,y)f(y)dy
$$ There is even a sort of generalization of the Riesz representation theorem called the Schwartz Kernel Theorem that says that any (nice enough) linear operator $g = Kf$ can be written using an "integral" like that, but where the kernel function $k(x,y)$ is possibly a distribution. The moral of the story is that you should extend your understanding of the integral notation to include other linear operations, not just integration of standard functions. Once you've proved that all the usual operations that you're used to, like integration by parts, make sense with distributions, you'll see that using the integral notation is very natural and 100% rigorous - as long as you remember that it's just notation for "apply the linear operation specified".
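To make the "it's just notation" point concrete, here is a small numerical sketch (my own illustration, not from any particular text): pairing a smooth test function with a narrow Gaussian via an ordinary integral, and watching the value approach $\varphi(0)$, which is all that the symbol $\int \delta(x)\varphi(x)\,dx$ is shorthand for.

    import numpy as np

    def pair_with_gaussian(phi, sigma, L=10.0, n=200001):
        # <delta_sigma, phi>: an ordinary integral against a narrow Gaussian,
        # approximated here by a Riemann sum on a fine grid
        x = np.linspace(-L, L, n)
        delta_sigma = np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        dx = x[1] - x[0]
        return np.sum(delta_sigma * phi(x)) * dx

    phi = lambda x: np.cos(x) * np.exp(-x ** 2)    # a smooth, rapidly decaying test function
    for sigma in (1.0, 0.1, 0.01):
        print(sigma, pair_with_gaussian(phi, sigma))   # tends to phi(0) = 1.0 as sigma -> 0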
A2. It isn't fatal at all to think of the delta function in this way - in fact this is a preferred method to define the delta function and many other distributions. The rect function represents a sort of local averaging, and you can think of the delta function as being an "infinitely local" averaging (i.e. sampling). The one thing I would recommend is looking into "approximations to the identity" - the rect function construction is just one possible construction, and in order to show that the delta function is uniquely defined, one should show that any similar sequence of approximate deltas also gives the same result (e.g. triangles, Gaussians, etc). In other words, you could either define the delta function as "that linear functional such that $F(\varphi) = \varphi(0)$, in which case you need to show that this is a well-defined, bounded linear operation on some function space, or alternately you could define the delta function as "the limit of $\langle \delta_\epsilon,\varphi\rangle$ as $\epsilon\rightarrow 0$, in which case you still need to prove that this is a well-defined, linear operation on some function space. Either way, the result is the same.<|endoftext|>
TITLE: When is $X_1^{a_1} \cdots X_n^{a_n}-1$ irreducible?
QUESTION [15 upvotes]: Let $F$ be a field, and $a_1, ... , a_n \geq 1$ integers. When is the polynomial $$f = X_1^{a_1} \cdots X_n^{a_n}-1$$ irreducible in $F[X_1, ... ,X_n]$?
I believe this should be the case if and only if $d = \gcd(a_1, ... , a_n) = 1$. At least for $F = \mathbb{C}$, the examples I've computed indicate this to be true.
If $d > 1$, then $f$ is not irreducible, since $$f = (X_1^{a_1/d} \cdots X_n^{a_n/d} - 1)[\sum\limits_{i=0}^{d-1} (X_1^{a_1/d} \cdots X_n^{a_n/d})^i]$$
I haven't been able to show the converse yet.
REPLY [4 votes]: Thanks to darij grinberg for the following approach. Here are the details written out:
(1): Let $R = F[X_1, ... , X_n], S = \{ (X_1 \cdots X_n)^t : t \geq 0 \}$. Then $f$ is irreducible in $R$ if and only if it is irreducible in $S^{-1}R = F[X_1, X_1^{-1}, ... , X_n, X_n^{-1}]$.
Since $R$ is a Noetherian unique factorization domain, so is $S^{-1}R$. If $f$ is irreducible in $R$, then it generates a prime ideal $Rf$, and the extension of this ideal to $S^{-1}R$ remains prime, because $Rf \cap S$ is empty (in the UFD $R$, any divisor of a monomial is a unit times a monomial, and $f$ is not).
Conversely, if $f$ is irreducible in $S^{-1}R$, then it generates a prime ideal $\mathfrak p$ of $S^{-1}R$, because $S^{-1}R$ is a unique factorization domain. For the inclusion map $R \rightarrow S^{-1}R$, $Rf$ is the contraction of its extension to $S^{-1}R$. This extension is $\mathfrak p$, so $Rf$ is a prime ideal of $R$, hence $f$ is irreducible.
(2): If $A = (\alpha_{ij}) \in \textrm{GL}_n(\mathbb{Z})$, then $A$ induces a bijection $\mathbb{Z}^n \rightarrow \mathbb{Z}^n$, which induces an $F$-linear automorphism $\phi$ of $S^{-1}R$, because $\{X_1^{a_1} \cdots X_n^{a_n} : a_i \in \mathbb{Z}\}$ is an $F$-basis for $S^{-1}R$. The only thing that isn't clear is that this is actually a ring homomorphism. To show this, it suffices to show that $\phi$ preserves multiplication of monomials. But this follows from the fact that $A(v+w) = Av+Aw$, where $v, w \in \mathbb{Z}^n$.
(3): If $a_1, ... , a_n$ are relatively prime integers, there is an $A \in \textrm{GL}_n(\mathbb{Z})$ which maps $(a_1, ... , a_n)$ to $(1, 0, ... , 0)$.
See this answer to the question "Information about Problem": Let $a_1,\cdots,a_n\in\mathbb{Z}$ with $\gcd(a_1,\cdots,a_n)=1$. Then there exists an $n\times n$ matrix $A$ ...
(4): The converse I mentioned in my question: since the GCD $d$ is equal to $1$, we can find an $A$ as in (3), hence a ring isomorphism $\phi$ which maps $f$ to $X_1 - 1$. Since $X_1 - 1$ is irreducible in $R$, it is so in $S^{-1}R$, hence $f$ is irreducible in $S^{-1}R$, hence in $R$.<|endoftext|>
TITLE: Is $\mathbb{R}$ a finite field extension?
QUESTION [5 upvotes]: Is there a field $K \subseteq \mathbb{R}$ with $[\mathbb{R} : K] < \infty$?
My intuition tells me no: I would imagine that $K(\alpha)$ would be missing some $n$th root of $\alpha$ for any $\alpha \in \mathbb{R} \setminus K$. But I'm not sure how to approach this: it's certainly not true of all finite field extensions (e.g. $\mathbb{R} \subseteq \mathbb{C}$), and there are irrationals $\beta$ s.t. $\beta$ has arbitrarily (but finitely) many roots in $\mathbb{Q}(\beta)$: e.g. $n$th powers of numbers satisfying $x^n - x - 1$.
REPLY [2 votes]: If such a $K$ exists, then $[C:K]$ is finite, where $C$ is the field of complex numbers, and you can apply the Artin-Schreier theorem to show that $K=\mathbb{R}$.
https://mathoverflow.net/questions/8756/examples-of-algebraic-closures-of-finite-index/8759#8759<|endoftext|>
TITLE: Express $1 + \frac {1}{2} \binom{n}{1} + \frac {1}{3} \binom{n}{2} + \dotsb + \frac{1}{n + 1}\binom{n}{n}$ in a simplifed form
QUESTION [9 upvotes]: I need to express $$1 + \frac {1}{2} \binom{n}{1} + \frac {1}{3} \binom{n}{2} + \dotsb + \frac{1}{n + 1}\binom{n}{n}$$ in a simplified form.
So I used the identity $$(1+x)^n=1 + \binom{n}{1}x + \binom{n}{2}x^2 + \dotsb + \binom{n}{n}x^n$$
Now, integrating both sides and putting $x=1$,
I get that $$\frac{2^{n+1}}{n+1}$$ is equal to the given expression. But the answer in my book is $$\frac{2^{n+1}-1}{n+1}.$$
Where does that -1 term in the numerator come from?
REPLY [4 votes]: Among $n+1$ people, a subset is randomly selected (i.e., each person will be in the subset or not with probability $1/2$). Then one person in the subset (if it is nonempty) is selected at random to win a prize. What's the probability that I (one of the $n+1$ people) win it?
There are $\binom{n}{k}$ ways to pick a subset of size $k+1$ that contains me; the probability of that subset is $\frac{1}{2^{n+1}}$, and the probability that I am the one selected is $\frac{1}{k+1}$. So the desired probability is
$$
\frac{1}{2^{n+1}} \sum_{k=0}^n \frac{1}{k+1} \binom{n}{k}. \tag{1}
$$
On the other hand, everyone out of the $n+1$ has an equal chance of winning, and there is only a $\frac{1}{2^{n+1}}$ chance of no one being selected, so the probability is
$$
\frac{2^{n+1} - 1}{2^{n+1}} \cdot \frac{1}{n+1}. \tag{2}
$$
Thus (1) and (2) are equal, and if we multiply by $2^{n+1}$ we get
$$
\sum_{k=0}^n \frac{1}{k+1} \binom{n}{k}
= \frac{2^{n+1} - 1}{n+1}.
$$<|endoftext|>
TITLE: $\mathbb R^2$ is not homeomorphic to $\mathbb R^3$.
QUESTION [9 upvotes]: I was reading Munkres for Topology. In that, it is mentioned that $\mathbb R$ is not homeomorphic to $\mathbb R^2$ as deleting a point from both makes the first one disconnected while the latter one still remains connected.
Can't we say on the same lines that $\mathbb R^2$ is not homeomorphic to $\mathbb R^3$ as deleting a line from both makes the first one disconnected but second one is still connected.
Where am I going wrong?
EDIT
I know we can show them to be non homeomorphic using simple connectedness, but I want to find the error in that approach.
REPLY [9 votes]: Key fact: "$x$ is a point in topological space $X$" is a property invariant under homeomorphism, whereas "$x$ is a line in topological space $X$" (whatever definition of line you take) is not invariant under homeomorphism.
The argument that $\mathbb{R}$ and $\mathbb{R}^2$ are not homeomorphic goes through because, supposing $\mathbb{R}$ were somehow homeomorphic to $\mathbb{R}^2$ we know the removed point in $\mathbb{R}$ corresponds to a point in $\mathbb{R}^2$. But not so with a line in $\mathbb{R}^2$ corresponding to a line in $\mathbb{R}^3$.
Of course, instead of removing a line from $\mathbb{R}^2$, you could remove a copy of $\mathbb{R}$ (a subspace homeomorphic to $\mathbb{R}$). This is now invariant under homeomorphism; however, you have essentially the same problem: a copy of $\mathbb{R}$ in $\mathbb{R}^3$ need not be anything nice, and certainly need not be a line.
Another way of saying this is that the "line" in $\mathbb{R}^3$ can't be chosen by you; it has to be the image of the line in $\mathbb{R}^2$ under the homeomorphism.
To show topological spaces $X$ and $Y$ are not homeomorphic, it certainly is not sufficient to remove the same subspace from both (that you pick) and show that the results are not homeomorphic spaces. A counterexample is deleting the open interval $(0,1)$ from $\mathbb{R}$ and $(0,2)$ (homeomorphic topological spaces); the former becomes disconnected; the latter remains connected.<|endoftext|>
TITLE: What's the upper bound for sofa problem?
QUESTION [9 upvotes]: I have seen a claim that for the sofa problem, an upper bound for the area of a sofa is $2 \sqrt 2$, and that this can be proved by a "simple" argument. But I can't find a proof. What is that argument?
(The articles and papers I found all seem to point back to On the enfeeblement of mathematical skills by modern mathematics and by similar soft intellectual trash in schools and universities, but that paper gives this bound as a problem, and I can't find the solution.)
REPLY [6 votes]: It does not seem useful to me to just repeat the solution in e.g. Ian Stewart's book Another Fine Math You've Got Me Into... as linked from the Wikipedia Moving sofa problem page, so I'll just describe how I understood it.
(Note: This is take 2. Edited on 2017-12-12 to fix the +2 to -2 in the formula, and to expand a bit on how the areas are calculated.)
Consider the situation when the sofa is rotated 45 degrees with respect to the corner. In order to fit into the straight part of the corridor, the sofa must lie within the shaded area (corresponding to a straight corridor of width 1, denoted by dashed lines) in the illustrations below.
We can parametrize the different possibilities by the distance $h$ between the outer corner and the part that fits into the straight corridor.
There are two basic cases:
The sofa fits within the corner triangle, $0 \le h \lt \sqrt{2} - 1$:
Note that the area of the two triangles together is $(2 h + 2)(h + 1)/2 = h^2 + 2 h + 1$, and the area of the upper, white triangle is $2h \cdot h/2 = h^2$.
In this case, the area is $$A_1(h) = h^2 + 2 h + 1 - h^2 = 2 h + 1$$
which reaches its maximum $2 \sqrt{2} - 1 \approx 1.828427$ at the upper limit, $h = \sqrt{2} - 1$.
The sofa does not fit into the corner triangle; $\sqrt{2} - 1 \le h \le \sqrt{2}$:
Note that the area of the triangle above the blue line is $2\sqrt{2}\sqrt{2}/2 = 2$, the area of the white triangle is $2 h h / 2 = h^2$, and the area of each of the parallelograms below the blue line is $\sqrt{2}(1 - (\sqrt{2} - h)) = \sqrt{2} h + \sqrt{2} - 2$.
In this case, the shaded area, the area the sofa can possibly occupy, is
$$A_2(h) = (2 - h^2) + 2 (\sqrt{2} h + \sqrt{2} - 2) = 2\sqrt{2} - 2 + 2\sqrt{2} h - h^2$$
which reaches its maximum, $2\sqrt{2} \approx 2.828427$, at the upper end of the range, $h = \sqrt{2}$.
Note that $h \lt 0$ makes no physical sense (we'd just be rejecting part of the available width for the sofa for zero gain), and that $h \gt \sqrt{2}$ means the sofa would split into two in the cornering.
In short, the maximum area the sofa can occupy, as a function of $h$, is
$$A(h) = \begin{cases}
0, & h \lt -1 \quad \text{(no sofa at all)} \\
(h + 1)^2, & -1 \le h \lt 0 \\
2 h + 1, & 0 \le h \lt \sqrt{2} - 1 \\
2\sqrt{2} - 2 + 2\sqrt{2} h - h^2, & \sqrt{2} - 1 \le h \le \sqrt{2} \\
2 \sqrt{2}, & h \gt \sqrt{2} \quad \text{ but split in two } \end{cases}$$
which reaches a maximum at $h = \sqrt{2}$ (still connected by an infinitesimally thin part at the middle), with area $2 \sqrt{2}$:
Indeed, all sofa variants, convex and concave, and even cars (that turn both ways), must reside within the shaded area -- just with different values of $0 \le h \le \sqrt{2}$. If they do not, they either do not fit into the straight corridor, or they do not fit into the corner at a 45 degree angle.
This means the maximum area for the moving sofa (and cars) cannot exceed $2\sqrt{2}$.
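For the record, here is a small Python sketch (my own check, simply hard-coding the piecewise formula above) that evaluates $A(h)$ on a grid and confirms the maximum is $2\sqrt{2}$:

    import numpy as np

    def area_bound(h):
        # maximum area available to the sofa at the 45-degree position, as a function of h
        s = np.sqrt(2)
        if h < -1:
            return 0.0                       # no sofa at all
        if h < 0:
            return (h + 1) ** 2
        if h < s - 1:
            return 2 * h + 1
        if h <= s:
            return 2 * s - 2 + 2 * s * h - h * h
        return 2 * s                         # h > sqrt(2): the "sofa" has split in two

    hs = np.linspace(-1.5, 1.6, 100001)
    print(max(area_bound(h) for h in hs), 2 * np.sqrt(2))   # both about 2.8284271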
(Note that because we only consider the non-rotated and rotated-by-45-degrees cases, this does not prove that there is a sofa (or car) with area $2\sqrt{2}$ that can turn the corner. We only proved that no sofa (or car) can be larger than $2\sqrt{2}$ in area, because a larger one would be unable to fit into the corridor when rotated by 45 degrees.)<|endoftext|>
TITLE: Euclidea 3 9.8 Chord Trisection
QUESTION [5 upvotes]: Construct a chord of the larger circle through the given point (on circumference of larger circle) that is divided into three equal segments by the smaller circle (circles are concentric)
I'm having trouble finding a method to solve this geometric problem (construction). Google translation of a Russian discussion suggested using Thales' theorem and parallels. Any thoughts?
REPLY [2 votes]: Here is the solution to the problem for those interested. Thank you for the help people.<|endoftext|>
TITLE: Concerning Groups having the property that intersection of any two non-trivial subgroups is non-trivial
QUESTION [11 upvotes]: The group of rational numbers $(\mathbb Q,+)$ has an interesting property: the intersection of any two non-trivial subgroups of this group is non-trivial. Let us call this property the "non-trivial intersection property", or NIP in short. Now it is easy to see that this NIP property is invariant under group isomorphism, so if $G$ is a group having NIP, then $G$ cannot be isomorphic with $H \times K$ for any groups $H$ and $K$ with $|H|,|K|>1$ (because then $\{e_H\} \times K$ and $H\times \{e_K\}$ are non-trivial subgroups of $H \times K$ with trivial intersection).
I am looking for more examples of groups having NIP; do there exist infinitely many non-isomorphic such groups? Also, have these kinds of groups been studied? Any reference or link will also be very helpful. Thanks in advance
NOTE: All groups considered are assumed to have more than one element.
REPLY [2 votes]: As examples of finite groups with this property let me add the (generalized) quaternion groups. The quaternion group $Q_8=\{\pm1,\pm i,\pm j,\pm k\}$ occurs often enough in introductory courses. It has the given property because $-1$ is the only element of order two, and thus is contained in every non-trivial subgroup.
The generalized version $Q_{2^n}$, $n>3$, is defined by
$$
Q_{2^n}=\langle a,b\mid a^{2^{n-2}}=b^2, b^4=1, bab^{-1}=a^{-1}\rangle.
$$
It is easier for me to think of this as the group generated by the complex matrices
$$
A=\left(\begin{array}{cc}\zeta&0\\0&\zeta^{-1}\end{array}\right),
\qquad
B=\left(\begin{array}{cc}0&1\\-1&0\end{array}\right),
$$
where $\zeta$ is a primitive root of unity of order $2^{n-1}$.
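As a quick computational sanity check (my own addition), one can generate this matrix group for, say, $n=4$ and confirm that it has order $2^n$ and a unique element of order two, which is the point used in the next sentence:

    import numpy as np

    n = 4                                        # Q_16; zeta is a primitive 2^(n-1)-th root of unity
    zeta = np.exp(2j * np.pi / 2 ** (n - 1))
    A = np.array([[zeta, 0], [0, 1 / zeta]])
    B = np.array([[0, 1], [-1, 0]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    def key(M):
        # hashable fingerprint of a matrix, robust to tiny floating-point error
        return tuple(np.round(M.flatten(), 8))

    # close {A, B} under multiplication, starting from the identity
    elements = {key(I2): I2}
    frontier = [I2]
    while frontier:
        new = []
        for M in frontier:
            for G in (A, B):
                P = M @ G
                if key(P) not in elements:
                    elements[key(P)] = P
                    new.append(P)
        frontier = new

    print(len(elements))                         # 16 = 2^n
    order_two = [M for M in elements.values()
                 if np.allclose(M @ M, I2) and not np.allclose(M, I2)]
    print(len(order_two))                        # 1, namely -I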
Again, the element $-I_2=B^2=A^{2^{n-2}}$ is the only element of order two, and is thus contained in all the non-trivial subgroups.<|endoftext|>
TITLE: Universal property of a monoid ring expressed as an adjunction?
QUESTION [5 upvotes]: Given a (not necessarily commutative) ring $R$ and a monoid $M$, we can form the monoid ring $R[M]$ in the same way as we form the group ring. I would expect the monoid ring construction to be left adjoint to some "forgetful" functor, probably sending a ring to its underlying multiplicative monoid, but the universal property given in the wikipedia article suggests that it is not so simple, since it involves a pair of homomorphisms and a commutativity condition. This suggests to me that maybe it involves functors (into or out of) a coslice category (perhaps rings under R?).
What would be the correct way of stating this universal property as an adjunction? I am just curious.
REPLY [3 votes]: Christian Sievers's comment has the right idea. Fix the ring $R$. An $R$-algebra is an $R$-bimodule, $M$, equipped with a map $\times:M\otimes M\rightarrow M$ which is associative and has an identity element. Note that $R[M]$ is not just a ring but actually an $R$-algebra.
Every $R$-algebra has an underlying multiplicative monoid given by forgetting about the $R$-module structure and only remembering $\times$. The functor $R[-]$ is left adjoint to this forgetful functor.
Abelian groups are exactly the same thing as $\mathbb Z$-modules. Similarly, rings are the same as $\mathbb Z$-algebras (because their underlying additive groups are $\mathbb Z$-modules, and their multiplication gives $\times$). Therefore the left adjoint to the forgetful functor which takes any ring and gives its multiplicative monoid is the functor $\mathbb Z[-]$.<|endoftext|>
TITLE: Can there be only one extension to the factorial?
QUESTION [5 upvotes]: Usually, when someone says something like $\left(\frac12\right)!$, they are probably referring to the Gamma function, which extends the factorial to any value of $x$.
The usual definition of the factorial is $x!=1\times2\times3\times\dots\times x$, but for $x\notin\mathbb{N}$, the Gamma function results in $x!=\int_0^\infty t^xe^{-t}dt$.
However, back a while ago, someone mentioned that there may be more than one way to define the factorial for non-integer arguments, and so, I wished to disprove that statement with some assumptions about the factorial function.
the factorial
is a $C^\infty$ function for $x\in\mathbb{C}$ except at $\mathbb{Z}_{<0}$ because of singularities, which we will see later.
is a monotone increasing function that is concave up for $x>1$.
satisfies the relation $x!=x(x-1)!$
and lastly $1!=1$
From $3$ and $4$, one can define $x!$ for $x\in\mathbb{N}$, and we can see that for negative integer arguments, the factorial is undefined. We can also see that $0!=1$.
Since we assumed $2$, we should be able to sketch the factorial for $x>1$, using our points found from $3,4$ as guidelines.
At the same time, when sketching the graph, we remember $1$, so there can be no jumps or gaps from one value of $x$ to the next.
Then we reapply $3$, correcting values for $x\in\mathbb{R}$, since all values of $x$ must satisfy this relationship.
Again, because of $1$, we must re-correct our graph, since having $3$ makes the derivative of $x!$ for $x\in\mathbb N$ undefined.
So, because of $1$ and $3$, I realized that there can only be one way to define the factorial for $x\in\mathbb R$.
Is my reasoning correct? And can there be only one extension to the factorial?
Oh, and here is a 'link' to how I almost differentiated the factorial only with a few assumptions, like that it is even possible to differentiate.
Putting that in mind, it could be possible to define the factorial with Taylor's theorem?
REPLY [5 votes]: The Bohr-Mollerup Theorem states that the Gamma function is the only log-convex function which satisfies $\Gamma(1)=1$ and $x\Gamma(x)=\Gamma(x+1)$. There are a continuum of functions that are not log-convex that satisfy the other two constraints.
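For a concrete illustration (my own example, of a standard type): for any constant $c$, the function $F(x)=\Gamma(x)\bigl(1+c\sin(2\pi x)\bigr)$ satisfies $F(1)=1$ and $F(x+1)=xF(x)$, and agrees with $\Gamma$ at the positive integers, but it is not log-convex for $c\neq 0$ and differs from $\Gamma$ between the integers. A quick numerical check:

    from math import gamma, sin, pi

    def F(x, c=0.5):
        # an alternative "Gamma": same functional equation and same integer values,
        # but not log-convex, so it is ruled out by Bohr-Mollerup
        return gamma(x) * (1 + c * sin(2 * pi * x))

    for x in (0.3, 1.7, 2.5, 4.0):
        print(x, abs(F(x + 1) - x * F(x)) < 1e-9, F(x), gamma(x))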
This answer details some of the important points of this.<|endoftext|>
TITLE: Does fundamental group distinguish between any two non homeomorphic topological space?
QUESTION [5 upvotes]: I am new to fundamental group.
I was reading Munkres and found that the fundamental group is needed to distinguish between non-homeomorphic topological spaces.
So my question is, does fundamental group distinguish between any two non-homeomorphic topological space?
Or do there exist some spaces which are non-homeomorphic but whose fundamental groups are the same?
My intuition says it's a successful tool to distinguish between them.
Thanks in advance.
REPLY [9 votes]: The fundamental group does not, in fact, distinguish spaces up to homeomorphism.
For a simple example of this, each of the following spaces have trivial fundamental group, yet no two are homeomorphic:
The real line, $\mathbb{R}$.
The "Long line" https://en.wikipedia.org/wiki/Long_line_(topology).
The plane $\mathbb{R}^2$.
The one-point space.
The 2-sphere $\{(x, y, z)\in\mathbb{R}^3: x^2+y^2+z^2=1\}$.<|endoftext|>
TITLE: Algebraic Structures that do not respect isomorphism
QUESTION [26 upvotes]: One of the first things a student learn in Algebra is isomorphism, and it seems many objects in algebra are defined up to isomorphism.
It then comes as a mild shock (at least to me) that quotient groups do not respect isomorphism, in the sense that if $G$ is a group, and $H$ and $K$ are isomorphic normal subgroups, $G/H$ and $G/K$ may not be isomorphic. (see Isomorphic quotient groups)
My two questions are:
1) What other algebraic "structures" or "operations" do not respect isomorphism?
2) Philosophically (or heuristically), why are there algebraic structures that do not respect isomorphism? Is this supposed to be surprising or not surprising? To me $G/H$ not isomorphic to $G/K$, even though I understand the counterexample, is as surprising as $\frac{2}{1/2}\neq\frac{2}{0.5}$.
Thanks for any help!
REPLY [2 votes]: Perhaps this is the Galois-theoretic analogue for fields of the example for groups. $A={\bf Q}(\root4\of2)$ and $B={\bf Q}(i\root4\of2)$ are isomorphic as fields, but not as extensions of $C={\bf Q}(\sqrt2)$. That is, there is no field isomorphism of $A$ and $B$ fixing $C$.<|endoftext|>
TITLE: On the difference between $\textbf{R}^{\{1,2,...,n\}}$, $\textbf{R}^{\{1,2,...,n+1\}}$, $\textbf{R}^{[0, 1]}$, and $\textbf{R}^\infty$
QUESTION [5 upvotes]: I'm working my way through Axler's "Linear Algebra Done Right" (3rd ed.), and I'm getting stuck on section 1.23, which says:
If $S$ is a set, then $\textbf{F}^S$ denotes the set of functions from $S$ to $\textbf{F}$.
For $f, g \in \textbf{F}^S$, the sum $f + g \in \textbf{F}^S$ is the function defined by
$$(f + g)(x) = f(x) + g(x)$$
for all $x \in S$.
For $\lambda \in \textbf{F}$ and $f \in \textbf{F}^S$, the product $\lambda f \in \textbf{F}^S$ is the function defined by
$$(\lambda f)(x) = \lambda f(x)$$
for all $x \in S$.
As an example of the notation above, if $S$ is the interval [0,1] and $\textbf{F} = \textbf{R}$, then $\textbf{R}^{[0,1]}$ is the set of real-valued functions on the interval [0,1].
In the next paragraph, the author goes on to assert the following:
Our previous examples of vector spaces, $\textbf{F}^n$ and $\textbf{F}^\infty$, are special cases of the vector space $\textbf{F}^S$ because a list of length $n$ of numbers in $\textbf{F}$ can be thought of as a function from {1, 2, ..., $n$} to $\textbf{F}$ and a sequence of numbers in $\textbf{F}$ can be thought of as a function from the set of positive integers to $\textbf{F}$. In other words, we can think of $\textbf{F}^n$ as $\textbf{F}^{\{1,2,...,n\}}$ and we can think of $\textbf{F}^\infty$ as $\textbf{F}^{\{1,2,...\}}$.
It's at this point I get confused, due mostly to the example which relies on $\textbf{R}^{[0,1]}$. It seems to me that the number of elements in $\textbf{R}^{[0,1]}$ should be uncountably infinite and that I should be able to generate any value between $-\infty$ and $\infty$ from [0,1] using some member of $\textbf{R}^{[0,1]}$, which feels a whole lot like generating a point in $\textbf{R}^\infty$.
If that's the case, then what's the difference between a tuple generated from $\textbf{R}^{\{1,2,...,n\}}$ and one generated from $\textbf{R}^{\{1,2,...,n+1\}}$?
I think my difficulty lies in not understanding implicit restrictions on the notation. The answer to this specific sub-question may be a shortcut to understanding: Suppose I'm trying to think of a particular point $(x,y,z)\in\textbf{R}^3$ as an element of $\textbf{R}^{\{1,2,3\}}$, where $f,g,h\in\textbf{R}^{\{1,2,3\}}$. Must I think of that point in $\textbf{R}^3$ as $(f(1)=x, f(2)=y, f(3)=z)$, or can I think of it as $(f(1)=x, g(2)=y, h(3)=z)$?
REPLY [4 votes]: It's the former. The point normally written $(2, -3, 5)$ corresponds, in this book's description, to the function $f: \{1, 2, 3\} \to \mathbb R$ with $f(1) = 2, f(2) = -3,$ and $f(3) = 5$.<|endoftext|>
TITLE: Is a negative number squared negative?
QUESTION [16 upvotes]: $-3^2 = -9$
I found this problem while doing an algebra refresher in the book The Complete Idiot's Guide to Algebra. I asked my engineer brother this problem and he got it wrong. Searching Google for why a negative number squared is negative, I get conflicting results.
Google presents an excerpt from a site that says the converse. "This is because to square a number just means to multiply it by itself. For example, $(-2)$ squared is $(-2)(-2) = 4$. Note that this is positive because when you multiply two negative numbers you get a positive result." - This, of course, is the exact opposite of what was asked, but it's the given response.
The third item on Google's search results offered up a math forum where the moderator, one Doctor Rick, states that whether it is interpreted as -3^2 or (-3)^2 is a difference of opinion. It's math. How can it be a matter of opinion? If an equation is being used for calculating a spacecraft landing, or the engineering of a bridge design, a difference of opinion on how to calculate this could prove catastrophic.
The high school math teacher who authored The Complete Idiot's Guide to Algebra presented this question as "be careful, this one is tricky" specifically to teach this situation, but there seems to be some confusion as to which is the right way to calculate this.
My scientific calculator tells me it is 9. In another question here on SE regarding calculators with this same issue, the accepted answer was that adding parentheses fixed the "issue", but it doesn't address whether the calculator is getting it "wrong", because it's not actually wrong.
What is the correct answer, and why? Thanks!
REPLY [4 votes]: IMO it helps a lot to understand how the syntax of programming languages, and in a less straightforward way also maths notation, always corresponds to a tree data structure. For instance, $f(g(x), h(y,z))$ is really a character-string encoding for something like
$$
\begin{matrix}
& & f & &
\\& \nearrow & & \nwarrow &
\\ g & & & & h
\\ \uparrow & & & \nearrow & \uparrow
\\ x & & y & & z
\end{matrix}
$$
The term $-3^2$, or the Python expression -3**2, means
$$
\begin{matrix}
& & -\square\quad & &
\\ & & \uparrow & &
\\ & & ** & &
\\& \nearrow & & \nwarrow &
\\ 3 & & & & 2
\end{matrix}
$$
It does not mean
$$
\begin{matrix}
& & ** & &
\\& \nearrow & & \nwarrow &
\\ -\square & & & & 2
\\ \uparrow\!\!\!\!\! & & & &
\\ 3 \!\!\!\!\! & & & &
\end{matrix}
$$
Why not? Well, these are just the conventions for how expressions are parsed: exponentiation binds more tightly than negation (which is, kinda reasonably, on the same level as addition).
OTOH, if you write Math.Pow(-3, 2) in C#, then this clearly is parsed as
$$
\begin{matrix}
& & \mathrm{pow} & &
\\& \nearrow & & \nwarrow &
\\ -3 & & & & 2
\end{matrix}
$$
which is a different calculation and gives the result $9$. To express $-3^2$ in C#, use -Math.Pow(3,2).
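For completeness, the same contrast in Python (this is just the standard interpreter behaviour, analogous to the C# calls above):

    # Exponentiation binds more tightly than unary minus, so -3**2 parses as -(3**2),
    # while an explicit argument, as in pow(-3, 2), is a complete subtree of its own.
    print(-3**2)       # -9
    print((-3)**2)     # 9
    print(pow(-3, 2))  # 9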
In programming languages, the parsing rules are generally these:
Parentheses group a subtree together, no matter what happens around them. Function application is typically connected to parenthesis, so this also binds tighly.
Commata always separate independent subtrees. Hence the -3 in pow(-3,2) is independent of the 2 and the pow function.
All other infix operators, like + and **, have some predefined fixity. For instance, in C and C++ the operator-precedence hierarchy includes the following:
<, <=, >, >=
<<, >>
+, -
*, /, %
so when the expression pow(0+(-1)*3, 2) is encountered, the parser first splits it up at the comma, then at the +, then at the *, before considering the inner parenthesis. But in languages with an exponentiation operator, this should, as in maths notation, have a higher fixity than the other operators.
These parsing rules may subtly vary between different programming languages, but at least for a single language they must always be well-specified.
Alas, in maths it's often not so clear-cut – for some expressions it is indeed up to interpretation what they mean! For instance, does $\sin x^2$ mean $(\sin x)^2$ or rather $\sin(x^2)$? IMO it should be the former (because function application binds tightly), but I think the majority of mathematicians and scientists don't agree, and hence the completely ridiculous notation $\sin^2 x$ is used for that.
Oh well...<|endoftext|>
TITLE: Prove $\int_{\frac{\pi}{20}}^{\frac{3\pi}{20}} \ln \tan x\,\,dx= - \frac{2G}{5}$
QUESTION [8 upvotes]: Context: This question
asks to calculate a definite integral which turns out to be equal to $\displaystyle 4 \, \text{Ti}_2\left( \tan \frac{3\pi}{20} \right) -
4 \, \text{Ti}_2\left( \tan \frac{\pi}{20} \right),$ where $\text{Ti}_2(x) = \operatorname{Im}\text{Li}_2( i\, x)$ is the
Inverse Tangent Integral function.
The source for this integral is this question on brilliant.org.
In a comment, the OP
claims that the closed form can be further simplified to $-\dfrac\pi5 \ln\left( 124 - 55\sqrt5 + 2\sqrt{7625 - 3410\sqrt5} \right) + \dfrac85 G$.
How can we prove that?
I have thought about using the formula $$\text{Ti}_2(\tan x) = x \ln \tan x+ \sum_{n=0}^{\infty} \frac{\sin(2x(2n+1))}{(2n+1)^2}. \tag{1}$$
but that only mildly simplifies the problem.
Equivalent formulations include:
$$\, \text{Ti}_2\left( \tan \frac{3\pi}{20} \right) -
\, \text{Ti}_2\left( \tan \frac{\pi}{20} \right) \stackrel?= \frac{ \pi}{20} \ln \frac{ \tan^3( 3\pi/20)}{\tan ( \pi/20)} + \frac{2 G}{5} \tag{2}$$
$$ \sum_{n=0}^{\infty} \frac{\sin \left(\frac{3\pi}{10}(2n+1) \right)- \sin \left(\frac{\pi}{10}(2n+1)\right)}{(2n+1)^2} \stackrel?=\
\frac{2G}{5} \tag{3}$$
$$\int_{\pi/20}^{3\pi/20} \ln \tan x\,\,dx \stackrel?= - \frac{2G}{5} \tag{4}$$
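(Numerically, $(4)$ checks out; for instance, a quick sanity check with scipy, added here only for reference:)

    import numpy as np
    from scipy.integrate import quad

    G = 0.9159655941772190              # Catalan's constant
    value, err = quad(lambda x: np.log(np.tan(x)), np.pi / 20, 3 * np.pi / 20)
    print(value, -2 * G / 5)            # both approximately -0.36638624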
A related similar question is this one.
REPLY [6 votes]: Let $J(a)=\int_{\frac{\pi}{20}}^{\frac{3\pi}{20}}\tanh^{-1}\frac{2a\cos2x}{1+a^{2}}dx$ and evaluate
\begin{align}
J'(a) &= \int_{\frac\pi{20}}^{\frac{3\pi}{20}}\frac{2(1-a^2)\cos2x}{a^4+1-2a^2\cos4x}dx=\frac1{2a}{\left.\tan^{-1}\frac{2a\sin2x}{1-a^2}\right|_{\frac{\pi}{20}}^{\frac{3\pi}{20} } }\\
&=\frac1{2a}\tan^{-1}\frac{a-a^5}{1+a^6}
= \frac1{2a}(\tan^{-1}a - \tan^{-1}a^5)
\end{align}
where $\sin\frac{3\pi}{10}-\sin\frac{\pi}{10}=\frac12$ and $\sin\frac{3\pi}{10}\sin\frac{\pi}{10}=\frac14$ are recognized. Then
\begin{align}
\int_\frac\pi{20}^\frac{3\pi}{20}\ln(\tan x)~dx
&= -\int_\frac\pi{20}^\frac{3\pi}{20}\tanh^{-1}(\cos2x)dx =-J(1)
=-\int_0^1 J'(a)da \\
& =-\frac12 \int_0^1\left(\frac{\tan^{-1}a}{a}\right.
-\underset{a^5\to a}{\left.\frac{\tan^{-1}a^5}{a}\right)}da\\
&=-\left(\frac12-\frac1{10}\right) \int_0^1\frac{\tan^{-1}a}{a}da=-\frac25 G
\end{align}<|endoftext|>
TITLE: Fibonacci summation
QUESTION [9 upvotes]: Can anyone help me to prove the following relation.
$$\sum_{k=1}^{\infty} \frac{F_{2k}H^{(2)}_{k-1}}{k^2\binom{2k}{k}}=\frac{2\pi^4}{375\sqrt{5}}$$
I was recently studying Fibonacci and Lucas numbers, and I came across the above relationship. I tried applying the golden ratio but nothing worked.
Symbols have their usual meanings.
REPLY [8 votes]: It is well known that $$\sum_{k=1}^{\infty}\frac{x^{2k}}{k^{2}\dbinom{2k}{k}}H_{k-1}^{\left(2\right)}=\frac{2\arcsin^{4}\left(\frac{x}{2}\right)}{3},\,\left|x\right|\leq2$$ (see here or here for a proof), then from the Binet formula we get $$\sum_{k=1}^{\infty}\frac{F_{2k}}{k^{2}\dbinom{2k}{k}}H_{k-1}^{\left(2\right)}=\frac{1}{\sqrt{5}}\sum_{k=1}^{\infty}\frac{\left(1+\sqrt{5}\right)^{2k}}{2^{2k}k^{2}\dbinom{2k}{k}}H_{k-1}^{\left(2\right)}-\frac{1}{\sqrt{5}}\sum_{k=1}^{\infty}\frac{\left(1-\sqrt{5}\right)^{2k}}{2^{2k}k^{2}\dbinom{2k}{k}}H_{k-1}^{\left(2\right)}$$ $$=\frac{2\arcsin^{4}\left(\frac{1+\sqrt{5}}{4}\right)}{3\sqrt{5}}-\frac{2\arcsin^{4}\left(\frac{1-\sqrt{5}}{4}\right)}{3\sqrt{5}}=\color{red}{\frac{2\pi^{4}}{375\sqrt{5}}}.$$<|endoftext|>
TITLE: Lusin's theorem from Rudin RCA
QUESTION [5 upvotes]: Hi! Let me ask questions on Lusin's theorem from Rudin's RCA.
$1)$ As we know $s_n=\varphi_n \circ f$ (from Theorem 1.17), so $$2^nt_n(x)=2^n(s_n(x)-s_{n-1}(x))=2^n(\varphi_n \circ f(x)-\varphi_{n-1} \circ f(x))=\dots$$ As $0\leq f<1$ then $$\dots=2^n\left(\dfrac{[2^nf(x)]}{2^n}-\dfrac{[2^{n-1}f(x)]}{2^{n-1}}\right)=[2^nf(x)]-2[2^{n-1}f(x)]$$ where $[\cdot]$ denotes the integer part. Also the last expression above can only equal $0$ or $1$, since $[2\theta]-2[\theta]\in \{0,1\}$. And it equals $1$ if $\theta\in \bigcup \limits_{k\in \mathbb{Z}}[k+\frac{1}{2},k+1)$. So $2^nt_n(x)=1$ if $2^{n-1}f(x) \in \bigcup \limits_{k\in \mathbb{Z}}[k+\frac{1}{2},k+1)$ $\Rightarrow$ $x \in \bigcup \limits_{k\in \mathbb{Z}}f^{-1}([2^{-(n-1)}(k+\frac{1}{2}),2^{-(n-1)}(k+1)))$. So $T_n$ is the last set and it's measurable, i.e. $T_n \in \mathfrak{M}$. Am I right?
$2)$ When he writes that "$(1)$ holds if $A$ is compact and $f$ is a bounded measurable function", what does he mean about $f$? A non-negative function or not? If non-negative, then we must replace $f$ by $\alpha^{-1}f$ where $\alpha=\sup f+1$.
$3)$ Let's go further. Let's take a look at the penultimate paragraph, which I marked with a red line. I understood that if $A$ is any set with finite measure then by inner regularity we can find a compact set $K\subset A$ such that $m(A-K)$ is as small as needed. Note that $f$ is possibly not zero outside $K$! And we can't apply the above proof because there we use that $f=0$ on $A^c$.
$4)$ Also, why consider the sets $B_n$? And it's not obvious to me how he got the general case.
I would be very grateful if somebody explained what he does in this paragraph. I spent about one day on it but with no results :( In my opinion this paragraph is very brief and horrible.
REPLY [2 votes]: Let me write you a more detailed version of the second last paragraph. Suppose $f:X\to [0,+\infty)$ is a measurable function, $\mu(A) < +\infty$, and $f = 0$ outside $A$. Note that the general complex case can be decomposed to this case by considering the positive and negative parts of real and imaginary parts. What you know is that you can approximate such an $f$ on a compact set where it's bounded. Your goal is to find some compact set $K\subset A$, such that $f$ is bounded on $K$, and $\mu(A-K)$ is small.
Fix $\epsilon > 0$. Put $B_n = \{x:f(x) \geq n\}$. Then $B_{n+1}\subset B_n$ and $\bigcap B_n = \varnothing$, because $f(x)$ is finite for each $x$. You also know that $B_1 \subset A$, so $\mu(B_1)$ is finite. That means $\mu(B_n) \to 0$ as $n\to \infty$. Let $n$ be such that $\mu(B_n) <\epsilon$. Since $\mu(A-B_n)$ has finite measure, by inner regularity you can find a compact set $K\subset A-B_n$ such that $\mu(K) > \mu(A-B_n) - \epsilon$. Now you have $f$ bounded on a compact set $K$, and $\mu(A-K) < 2\epsilon$.<|endoftext|>
TITLE: How many times can strictly convex functions intersect?
QUESTION [5 upvotes]: Some time ago, I saw a post related to the number of times that two convex (and continuous) functions' graphs can meet. In general, infinitely many times: one can think, for instance, of $g(x):=x^{2}$ and $f(x)=x^{2}+\sin(x)$.
But, if $f,g:[0,+\infty)\longrightarrow [0,+\infty)$ are continuous, strictly convex and
$$
\lim_{x\rightarrow +\infty}\frac{f(x)}{g(x)}=+\infty
$$
then, at least intuitively, one can state that $f$ and $g$ meet in, at most, two points, that is, the equation $f(x)-g(x)=0$ has, at most, two solutions. What do you think?
REPLY [3 votes]: Following up from my comment, here is a counterexample where both $f$ and $g$ are convex, continuous, have the desired property $\lim_{x\to\infty}f(x)/g(x) = +\infty$, but their graphs share infinitely many common points:
$\hskip2in$
You require that the functions are strictly convex and are defined on $[0, +\infty)$, but it is not difficult to modify the above counterexample; take
$$g(x) = x^2,\text{ for } x\geq 0$$
and
$$f(x) = \begin{cases}
g(x), \text{ for } x \in [0,1],\\
x^4, \text{ otherwise}
\end{cases}
$$
Both functions are continuous, strictly convex, their values are nonnegative, and
$$
\lim_{x\to \infty}\frac{f(x)}{g(x)} = \lim_{x\to\infty}\frac{x^4}{x^2} = \infty.
$$<|endoftext|>
TITLE: Why do we call this transformation non-singular?
QUESTION [7 upvotes]: In linear algebra books, the authors call the linear transformation $T$ with the property
$$T(\alpha)=0\implies \alpha=0$$
non-singular.
What's the motivation behind the term "non-singular"?
REPLY [7 votes]: Suppose $Tv=0$ implies $v=0$. It follows immediately that $T$ is injective since if $Tv_{1}=Tv_{2}$ then $Tv_{1}-Tv_{2}=T(v_{1}-v_{2})=0$ and hence $v_{1}-v_{2}=0$, or equivalently $v_{1}=v_{2}$. Therefore, we can unambiguously define $T^{-1}$ on the range of $T$.
The author might be calling $T$ nonsingular because it "comes with" a well-defined inverse (at least on the range of $T$).
Edit: As Qiaochu Yuan points out, this could be considered bad terminology. Usually nonsingular is reserved for (in addition to $T$ being injective) when the range of $T$ is the whole codomain.<|endoftext|>
TITLE: If $G$ a finite $p$-group s.t $G/[G;G]$ cyclic then $G$ abelian.
QUESTION [5 upvotes]: Let $G$ be a finite $p$-group and $[G,G]$ its commutator subgroup. I need to show that if the quotient group $G/[G,G]$ is cyclic then $G$ is an abelian group.
My attempt is to let $g\in G$ s.t $g$ modulo $[G,G]$ be a generator of
the group quotient, and thus for all elements $y$ of $G$ we have $y=g^m$
modulo $[G,G]$, that is $y=g^mu$ for some $u\in [G,G]$. So for
arbitrary $x,y\in G$, I must to prove that $xy=yx$. In writing
$x=g^nv$ and $y=g^mu$, I get $xy=g^nvg^mu$ and $yx=g^mug^nv$
Here I find myself stuck and I don't know what I can do. I may also be stuck because I do not see where I can use the assumption that $G$ is a $p$-group; it may be that this path, based solely on the definitions and the universal property of the commutator subgroup, fails. Could you help me to achieve this? Thanks in advance for all participation.
REPLY [3 votes]: This comes from an important (characterizing) property of $p$-groups (nilpotent groups): in a $p$-group, every maximal subgroup is of prime index and normal.
Suppose $G$ is non-abelian. We show that $G/[G,G]$ is not cyclic. Let $M$ be a maximal subgroup of $G$. It cannot be unique: otherwise, choosing $x\in G\setminus M$, the subgroup $\langle x\rangle$, if proper, would be contained in some maximal subgroup, which by uniqueness would be $M$, contradicting $x\notin M$; so $\langle x\rangle=G$, making $G$ cyclic, which contradicts $G$ being non-abelian.
Let $M'$ be another maximal subgroup. Now, both $M,M'$ are normal with $G/M$ and $G/M'$ isomorphic to cyclic group of order $p$, hence $[G,G]$ is contained in $M$ as well as $M'$.
Thus, in $G/[G,G]$, there are at least two maximal subgroups - $M/[G,G]$ and $M'/[G,G]$. This means $G/[G,G]$ cannot be cyclic (a finite cyclic $p$-group has a unique maximal subgroup).<|endoftext|>
TITLE: $f:\mathbb{R}\to\mathbb{R}$, continuous, such that $xf(x)>0$ when $x\neq 0$. Show that $f(0)=0$
QUESTION [6 upvotes]: I need to prove:
$f:\mathbb{R}\to\mathbb{R}$, continuous, such that $xf(x)>0$ when $x\neq 0$. Show that $f(0)=0$. Show that if we remove the continuity this result will fail. Give an example.
For an example, I thought of $f(x) = x$. This is continuous, and $xf(x) = x^2>0$ when $x\neq 0$, but I can't make a non-continuous version of this function to try. Maybe $f(x) = \frac{x^2-2x}{x-2}$? This function is not continuous at $x=0$ but it's equal to $x$ everywhere except at $x=0$, so $xf(x)>0$ still holds, but instead of having $f(0)\neq 0$ we don't even have a definition for $x=0$. Maybe if I define it with any number for $x=0$ it works?
Also, how does one prove such a result?
I tried considering:
$$g(x) = f(x)-x$$
Somehow if I assume $xf(x)>0$ I need to prove $g(0) = 0$, maybe using the intermediate value theorem. Any ideas? I can't see how $xf(x)>0$ helps.
REPLY [4 votes]: $xf(x) > 0$ implies that $f(x) > 0$ when $x$ is positive and $f(x) < 0$ when $x$ is negative.
By the intermediate value theorem, $f(c) = 0$ for some $c$ between any negative number and any positive number; since $f$ is non-zero at every $x \neq 0$, this zero must occur at $c = 0$, so $f(0)=0$.<|endoftext|>
TITLE: General solution of Pell's equation
QUESTION [7 upvotes]: If we know the minimal solution or any specific solution of Pell's equation $x^2-ny^2=1$ , is there is any general formula to write all solution of Pell's equation?
REPLY [2 votes]: Ummm; if you have $u,v$ the smallest positive (nonzero) solution to $u^2 - n v^2 = 1,$ then as in the answer by Stefan4024, all solutions are obtained from the $(1,0)$ solution by the mapping
$$ (x,y) \mapsto (ux + nvy, \; v x + u y). $$
The Cayley-Hamilton Theorem says that we can write separate linear recurrences for $x,y$. That is
$$ x_0 = 1, x_1 = u, x_2 = 2 u^2 - 1,$$
and
$$ x_{n+2} = 2 u x_{n+1} - x_n. $$
$$ y_0 = 0, y_1 = v, y_2 = 2 u v,$$
and
$$ y_{n+2} = 2 u y_{n+1} - y_n. $$
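Here is a minimal sketch of these recurrences in Python (my own code; it assumes the fundamental solution $(u,v)$ is already known, e.g. $(u,v)=(9,4)$ for $n=5$):

    def pell_solutions(n, u, v, count=8):
        """Solutions of x^2 - n*y^2 = 1 generated from the fundamental solution (u, v)
        via x_{k+2} = 2*u*x_{k+1} - x_k, and the same recurrence for y."""
        x0, y0 = 1, 0
        x1, y1 = u, v
        out = [(x0, y0), (x1, y1)]
        for _ in range(count - 2):
            x0, x1 = x1, 2 * u * x1 - x0
            y0, y1 = y1, 2 * u * y1 - y0
            out.append((x1, y1))
        return out

    for x, y in pell_solutions(5, 9, 4):
        assert x * x - 5 * y * y == 1
        print(x, y)        # (1,0), (9,4), (161,72), (2889,1292), ...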
The full proof that this gives all solutions is rather long. Maybe I should just draw attention to alternatives. All solutions to $x^2 - 5 y^2 = 6061$ make up a rather more complicated set; in the sense discussed above, there are eight orbits, not just one.
jagy@phobeusjunior:~$ ./Pell_Target_Fundamental
Automorphism matrix:
9 20
4 9
Automorphism backwards:
9 -20
-4 9
9^2 - 5 4^2 = 1
x^2 - 5 y^2 = 6061
Sun Jul 3 14:58:33 PDT 2016
x: 79 y: 6 ratio: 13.1667 SEED KEEP +-
x: 81 y: 10 ratio: 8.1 SEED KEEP +-
x: 129 y: 46 ratio: 2.80435 SEED KEEP +-
x: 159 y: 62 ratio: 2.56452 SEED KEEP +-
x: 191 y: 78 ratio: 2.44872 SEED BACK ONE STEP 159 , -62
x: 241 y: 102 ratio: 2.36275 SEED BACK ONE STEP 129 , -46
x: 529 y: 234 ratio: 2.26068 SEED BACK ONE STEP 81 , -10
x: 591 y: 262 ratio: 2.25573 SEED BACK ONE STEP 79 , -6
x: 831 y: 370 ratio: 2.24595
x: 929 y: 414 ratio: 2.24396
x: 2081 y: 930 ratio: 2.23763
x: 2671 y: 1194 ratio: 2.23702
x: 3279 y: 1466 ratio: 2.2367
x: 4209 y: 1882 ratio: 2.23645
x: 9441 y: 4222 ratio: 2.23614
x: 10559 y: 4722 ratio: 2.23613
x: 14879 y: 6654 ratio: 2.2361
x: 16641 y: 7442 ratio: 2.23609
x: 37329 y: 16694 ratio: 2.23607
x: 47919 y: 21430 ratio: 2.23607
x: 58831 y: 26310 ratio: 2.23607
x: 75521 y: 33774 ratio: 2.23607
x: 169409 y: 75762 ratio: 2.23607
x: 189471 y: 84734 ratio: 2.23607
x: 266991 y: 119402 ratio: 2.23607
x: 298609 y: 133542 ratio: 2.23607
x: 669841 y: 299562 ratio: 2.23607
x: 859871 y: 384546 ratio: 2.23607
x: 1055679 y: 472114 ratio: 2.23607
x: 1355169 y: 606050 ratio: 2.23607
x: 3039921 y: 1359494 ratio: 2.23607
x: 3399919 y: 1520490 ratio: 2.23607
x: 4790959 y: 2142582 ratio: 2.23607
x: 5358321 y: 2396314 ratio: 2.23607
x: 12019809 y: 5375422 ratio: 2.23607
x: 15429759 y: 6900398 ratio: 2.23607
x: 18943391 y: 8471742 ratio: 2.23607
x: 24317521 y: 10875126 ratio: 2.23607
Sun Jul 3 14:59:13 PDT 2016
x^2 - 5 y^2 = 6061
jagy@phobeusjunior:~$
TITLE: Favourite problem books at university level
QUESTION [19 upvotes]: As background let me start by stating what I perceive to be the point of problem books, or to put the matter in perhaps more acceptable way, how I define problem books. A large majority of textbooks include exercises. Problem books have two distinctive features: the first is that they include at least hints and often complete solutions, ideally both; the second is that the problems are not routine exercises.
Exercises are primarily to make sure you have correctly grasped the notation, key results, key concepts etc. They are an essential test of understanding for the majority of students. On the other hand, a teacher or researcher in the field can typically solve exercises on sight. Certainly they can see how to solve them, even if some of the details might take a few minutes to work out. Most Questions on this site are exercises. The major group of exceptions (leaving aside the hopelessly ill-drafted questions) are the contest problems.
Note that the problem/exercise distinction is independent of level. Questions in several complex variable theory can be exercises, and questions about Euclidean geometry can be hard contest problems.
Contest problems have become an important subset of "problems" (under my usage of the term). However, a large majority of contest problems are at the teenage pre-university level (all the "olympiads" and similar contests). Two easily accessible exceptions are the Putnam exams and (less easily) the Miklos Schweitzer competitions.
Over the last decade or so, it has become more fashionable to publish problem books as supplementary material for university teaching. The earliest and best were perhaps the Polya/Szego Problems and Theorems in Analysis (2 vols) (still in print), and Halmos' A Hilbert Space Problem Book. As an undergraduate at Cambridge University in the late 1960s the Knopp problem books were assigned for the first complex variable course.
There are still some really excellent new(ish) problem books. Some of my favourites are Problem Book in Relativity and Gravitation (Lightman et al) - not so new (I see I bought my copy in 1977), Cosmology and Astrophysics through Problems (Padmanabhan) (purchased 1996), and Arnold's Problems (acquired 2005, but material originally published 1999).
Guy and Croft's books (Unsolved problems in ...) are not really the same. They are essentially convenient annotated references. But Chung and Graham's Erdos on Graphs is different somehow. I like it, despite my lack of progress with it.
I also like the multi-authored Algebraic Geometry, A problem solving approach, despite the fact that many problems are trivial. I found it a fast way into a field I had totally neglected. Similarly, Halmos' unique style carries me along in his more recent Linear Algebra Problem book despite the fact that many problems can be solved on sight.
Finally, I hesitate to mention names, but some of the problem books on number theory seem to me collections of exercises rather than problems.
So my questions are: (1) can anyone recommend any favourites to keep me amused on the dark winter evenings that are fast approaching ...? (2) clearly the viewpoint above is unashamedly elitist. I am worried about the future. Too many people are being brought up (right through toy problems at the PhD level) in a way that is unlikely to help them do worthwhile research. Is that viewpoint tenable? Should the best students (at least) be encouraged to tackle problems at the Putnam level as undergraduates? At the Miklos Schweitzer level? (3) is it feasible to give as homework, problems at the contest level? Or are they simply "enrichment material"? (4) The previous questions seem to be addressed primarily to teachers and researchers. What do students think? Would they like more problems? Or less.
Finally I am aware of several previous questions in this area. The closest seems to be Good problem books at a relatively advanced level? . But it is not the same question. Nor did it attract much interest :(
REPLY [5 votes]: This became too long for a comment, but it is only my viewpoint on your second and fourth question. I did include a fun problem to compensate for the long read.
I've been brought up with toy exercises as well. Even when I was 18 years old I never thought about anything too deeply. An answer either came quickly or I labeled it as too difficult and dismissed it altogether. When I was 19 years old, I first thought about how to find the roots of a quadratic equation since I had forgotten the formula. After 2 minutes or so, and writing stuff down, I 'discovered' the formula myself. I still forget the formula every now and then, but it takes ten seconds to derive the formula again.
Even though solving an actual problem feels good, I was used to toy exercises for the most part of my life. The first 3 years at university I was interested in mathematics but not used to solving anything beyond the routine exercises. I needed inspiring people and smart friends to help me on the right road and seriously think about problems. But unfortunately, 21 years of not having that mentality leaves deep scars. I am not the mathematician I could have been.
When you ask what students want, you probably are going to get two very different versions. Beginning students that have only seen toy exercises are unlikely to demand serious problems and work hard on them. You cannot really blame them since they know no better. The other answer is very likely to be a hindsight answer. In hindsight, thinking harder on certain problems and doing more work would have helped you become a better version of yourself.
Recently I started a PhD (with a large teaching assignment). I'm certainly not the most gifted mathematician, but I do have my clear moments. Sometimes a smart insight occurs, but only after struggling with something and making my fair share of mistakes. This process is new to me and at times very hard. Having had only toy exercises for a long period of time certainly makes dealing with this for the first time, during my PhD, a lot harder. This should not be the case!
More often than not we hear wise old men say that things used to be better. And, when it comes to mathematics education, I have to agree. I see a lot of students struggling with exercises that even I, as a (not so great) student, considered toy exercises. A vast majority did considerably less thinking than I did up to that point, and well, that's absolutely shocking. I try to motivate students and present them with interesting problems that should help their overall understanding of mathematics. I try to tell the history of certain problems and the difficulties that great mathematicians had trying to solve them. I hope that in this way I can save some students from not thinking, because not thinking is something that they were taught very well.
PS: How many numbers of the form $101$, $10101$, $1010101, \dots$ are prime?<|endoftext|>
TITLE: Is the supremum of an almost surely continuous stochastic process measurable?
QUESTION [7 upvotes]: Let's take a stochastic process $(X_t)_{0\leq t \leq 1}$ and assume that the sample paths are almost surely continuous. Let us define $S \equiv \sup_{t \in [0,1]} X_t$. How can we show that $S$ is measurable?
For example, if we take the Brownian motion $B_t$ as our stochastic process, then given the continuity of the sample paths, we can focus on the supremum over $t \in [0,1] \cap \mathbb{Q}$, which is a countable, dense subset of $[0,1]$, and we have continuity of $B_t$, therefore the supremum is measurable (see the answer here: Measurability of the supremum of a Brownian motion).
How does almost sure continuity instead of continuity change the way of proving measurability?
I would be very grateful for any hint!
REPLY [6 votes]: We have to assume that the underlying probability space is complete; otherwise the assertion might fail.
So, suppose that $(\Omega,\mathcal{A},\mathbb{P})$ is a complete probability space and $(X_t)_{t \in [0,1]}$ a process with almost surely continuous sample paths, i.e. there exists a null set $N \in \mathcal{A}$ such that $$[0,1] \ni t \mapsto X_t(\omega)$$ is continuous for all $\omega \in \tilde{\Omega} := \Omega \backslash N$. Now
$$\tilde{X}_t(\omega) := \begin{cases} X_t(\omega), & \omega \in \tilde{\Omega}, \\ 0, & \omega \in N \end{cases}$$
defines a stochastic process on $\Omega$ with continuous sample paths, and therefore
$$\sup_{t \in [0,1]} \tilde{X}_t = \sup_{t \in [0,1] \cap \mathbb{Q}} \tilde{X}_t$$
is measurable as a countable supremum of measurable random variables. On the other hand, writing $\tilde{S} := \sup_{t \in [0,1]} \tilde{X}_t$, we have
$$\tilde{S}(\omega) = \sup_{t \in [0,1]} \tilde{X}_t(\omega) = \sup_{t \in [0,1]} X_t(\omega)= S(\omega) \quad \text{for all $\omega \in \tilde{\Omega} = \Omega \backslash N$}$$
and so
$$\{S \in B\} = \left( \{\tilde{S} \in B \} \cap N^c \right) \cup \bigg( \{S \in B \} \cap N \bigg)$$
for any Borel set $B$. Since $N \in \mathcal{A}$ and $\tilde{S}$ is measurable, we know that
$$\left( \{\tilde{S} \in B \} \cap N^c \right) \in \mathcal{A}.$$
Moreover,
$$\left\{ S \in B \right\} \cap N \subseteq N$$
and since the probability space is complete, this implies
$$\left\{ S \in B \right\} \cap N \in \mathcal{A}.$$
Combining both considerations proves $\{S \in B\} \in \mathcal{A}$, and this proves the measurability of $S$.
Remark More generally, the following statement holds true in complete probability spaces:
Let $(\Omega,\mathcal{A},\mathbb{P})$ and $(E,\mathcal{B},\mathbb{Q})$ be two measure spaces and assume that $(\Omega,\mathcal{A},\mathbb{P})$ is complete. Let $X, Y: \Omega \to E$ be two mappings. If $X$ is measurable and $X=Y$ almost surely, then $Y$ is measurable.<|endoftext|>
TITLE: Why is this not a valid algorithm for counting poker hands?
QUESTION [6 upvotes]: In my probability class, we had an example asking what is the probability of getting three of a kind when randomly drawing $5$ cards from a typical deck ($13$ denominations and $4$ suits, total of $52$ cards). Three of a kind is defined as getting three cards with the same denomination and two other cards which each have a unique denomination (looks like $AAABC$). The correct answer is as follows:
1) Choose denomination $A$, this can be done in $\binom{13}{1}$ ways
2) Choose the 3 suits for denomination $A$, this can be done in $\binom{4}{3}$ ways
3) Choose the remaining two denominations, $B$ and $C$, this can be done in $\binom{12}{2}$ ways
4) Choose the suits for the remaining two denominations, this can be done in $\binom{4}{1}$ ways for each
So the probability of getting three of a kind is: $$\frac {\binom{13}{1}\binom{4}{3}\binom{12}{2}\binom{4}{1}\binom{4}{1}}{\binom{52}{5}} \approx 0.02112845$$
Now my solution is mostly the same, however when selecting the denominations $B$ and $C$ I say that this can be done by selecting denomination $B$ in $\binom{12}{1}$ ways with $\binom{4}{1}$ possible suits. Then selecting denomination $C$ from the remaining $11$ denominations for any $4$ suits, so my answer is:
$$\frac {\binom{13}{1}\binom{4}{3}\binom{12}{1}\binom{4}{1}\binom{11}{1}\binom{4}{1}}{\binom{52}{5}} \approx 0.0422569$$
which turns out to be exactly double the answer above, this comes from the fact that $\binom{12}{1}\binom{11}{1}=2\binom{12}{2}$. My question is, which solution is correct and why? To me they both seem like valid solutions. Am I double counting certain hands?
REPLY [6 votes]: The first answer is correct. In this case you're counting unordered sets; however, in your solution, when you select $B$ and then $C$ separately you're in a sense counting ordered pairs. What I mean by this is: consider choosing $3$ aces ($A$), a king ($K$) and a queen ($Q$) as your hand. Then your algorithm counts the hands $AAAKQ$ and $AAAQK$ as distinct hands when they should be considered the same, since you counted your sample space using unordered sets. This also explains why your answer was exactly twice the correct one: you counted every possibility exactly twice, with the order flipped.<|endoftext|>
TITLE: Difference between $\mathbb{Q}[X]/(X-1) \otimes_\mathbb{Q} \mathbb{Q}[X]/(X+1)$ and $\mathbb{Q}[X]/(X-1)\otimes_{\mathbb{Q}[X]}\mathbb{Q}[X]/(X+1)$?
QUESTION [6 upvotes]: The original problem actually wants me to find which one is a zero module. But first, what is the difference between $\mathbb{Q}[X]/(X-1) \otimes_\mathbb{Q} \mathbb{Q}[X]/(X+1)$ and $\mathbb{Q}[X]/(X-1)\otimes_{\mathbb{Q}[X]}\mathbb{Q}[X]/(X+1)$? I am new to the concept of "tensor product" and I am having trouble understanding this.
According to the definition, we regard $\mathbb{Q}[X]/(X-1)$ as a module over $\mathbb{Q}$ in the first one, and regard it as a module over $\mathbb{Q}[X]$ in the second one. But how does that make a difference?
More specifically, since $\gcd(X-1,X+1)=1$ in $\mathbb{Q}[X]$, so $\mathbb{Q}[X]/(X-1)\otimes_{\mathbb{Q}[X]}\mathbb{Q}[X]/(X+1)$ is a zero module, because there exist $f,g\in\mathbb{Q}[X]$ such that $(X-1)f + (X+1)g = 1$. As a result, for any $r\otimes s=1\cdot (r\otimes s)=\left((X-1)f + (X+1)g\right)\cdot(r\otimes s)=(X-1)(fr\otimes s)+(X+1)(gr\otimes s) = 0$
But I don't know how to deal with the other one.
REPLY [10 votes]: What you're missing is that the ring you're tensoring over affects the scalars you can slide from left to right in the tensor product.
$\def\Q{\mathbb{Q}}$
In fact, the first of your tensor products is $\mathbb{Q}$ and the second is zero. We'll show the second one first (your proof also works and is the better proof because it generalizes, but I want to give a different proof that emphasizes the "sliding"). Let $f$ be arbitrary in $\mathbb{Q}[x]/(x-1)$. Then $xf = f$. For $g$ arbitrary in $\mathbb{Q}[x]/(x+1)$, one has $xg = -g$.
So given $f \otimes g \in \Q[x]/(x-1) \otimes_{\Q[x]} \Q[x]/(x+1)$, we have $f \otimes g = (xf) \otimes g = f \otimes (xg) = -(f\otimes g)$ so $f \otimes g = 0$; since every element of the tensor product is a sum of "simple tensors" (those of the form $f \otimes g$), the tensor product is zero.
For the other example, there is a $\Q$-module isomorphism $\mathbb{Q}[x]/(x+1) = \mathbb{Q}$ given by evaluating a polynomial at $-1$, and similarly for the other module, so the tensor product is $\Q \otimes_\Q \Q$ which is naturally isomorphic to $\mathbb{Q}$.
Soapbox: you'll get lots of lectures about how the best way to think about a tensor product is in terms of its universal property, which is true, but the practical way of thinking about it as "sums of symbols $a \otimes b$ where you can slide an r from the left to the right" is also very important and is easier to learn at first. Note that a lot of beginners forget the "sums" there, which is important.<|endoftext|>
TITLE: Show that there's no continuous function that takes each of its values $f(x)$ exactly twice.
QUESTION [17 upvotes]: I need to prove the following:
There's no continuous function $f:[a,b]\to \mathbb{R}$ that takes each of its values $f(x)$, $x\in [a,b]$ exactly twice.
First of all, I didn't understand the question. For example $x^2$ takes $1$ twice, in the interval $[-1,1]$. Is it saying that it does not occur for all $x$ in the interval? But what about $f(x) = c$? Is it saying that it does not occur only exactly $2$ times, then? I have no idea about how to prove it. I know that for $f(x)$ such that $f(a)0$ such that in the intervals $[c-\delta, c), (c,c+\delta)$ (and if $d$ is not extreme of $[a,b]$, $[d-\delta, d]$) the function takes values that are less than $f(c) = f(d)$. Let $A$ be the greatest of the numbers $f(c-\delta), f(c+\delta), f(d-\delta)$. By the intermediate value theorem, there are $x\in [c-\delta, c), y\in (c, c+\delta]$ and $z\in [d-\delta, d)$ such that $f(x)=f(y)=f(z)=A$. Contradiction.
Well, why the last part? Why is it that I can apply the intermediate value theorem to these values? For example, $
TITLE: Simple characterization of integers among abelian groups
QUESTION [5 upvotes]: This is part of an early exercise in Freyd's abelian categories. Let $\mathscr{G}$ be the category of abelian groups. The group of integers is distinguished, up to isomorphism, by the facts that:
For every $A\in\mathscr{G}$ that is not a zero object, Hom$(\mathbb{Z},A)$ has more than one element.
If $f:\mathbb{Z}\rightarrow\mathbb{Z}$ is such that $f^2 = f$, then either $f$ is the identity or it is the zero map.
I am trying to prove that if $A\in\mathscr{G}$ satisfies these two properties, then it is isomorphic to $\mathbb{Z}$. From condition (1), I get two nontrivial maps $\alpha: A\rightarrow\mathbb{Z}$ and $\zeta:\mathbb{Z}\rightarrow A$. The composition $\alpha\circ\zeta:\mathbb{Z}\rightarrow\mathbb{Z}$ is also nontrivial so is necessarily an embedding hence $\zeta$ is injective. Next, since the image of $\alpha$ is a subgroup of $\mathbb{Z}$, it is cyclic, generated by $n$, say. Post-composing $\alpha$ with the map $p\mapsto\frac{p}{n}$ allows me to assume that $\alpha$ surjects.
At this point I'm stuck. I need to use property (2) but I do not see how.
REPLY [3 votes]: Let $G$ be an abelian group satisfying the above two properties. Then there is a nonzero homomorphism $\phi: G \rightarrow \mathbb{Z}$. The image of $G$ is a nonzero subgroup of $\mathbb{Z}$, say $n\mathbb{Z}$ for $n \geq 1$. So as you say, we can choose $\phi$ to be surjective. So there is an exact sequence of abelian groups $$0 \rightarrow \textrm{Ker } \phi \rightarrow G \xrightarrow{\phi} \mathbb{Z} \rightarrow 0$$ Now $\mathbb{Z}$ is a free $\mathbb{Z}$-module, so any exact sequence with $\mathbb{Z}$ on the right splits. So there is a homomorphism $\iota: \mathbb{Z} \rightarrow G$ such that $\phi \circ \iota = 1_{\mathbb{Z}}$. But then $$(\iota \circ \phi)^2 = \iota \circ \phi \circ \iota \circ \phi = \iota \circ 1_{\mathbb{Z}} \circ \phi = \iota \circ \phi$$ so either $\iota \circ \phi = 1_G$ or $\iota \circ \phi = 0$. But we can't have $\iota \circ \phi = 0$.<|endoftext|>
TITLE: Properties of a finite field extension of degree 2.
QUESTION [7 upvotes]: I am bad (but trying to improve!) at very basic number theory and algebra. I'm quite sure this question is easy, but I do not know what fundamentals I am missing. This is from Ireland & Rosen's "Classical Introduction to Modern Number Theory" and is question 10 of Chapter 7. I have copied it exactly.
Let $K\supset F$ be finite fields and $[K:F]=2$. For $\beta\in K$ show that $\beta^{1+q}\in F$ and moreover that every element in $F$ is of the form $\beta^{1+q}$ for some $\beta\in K$.
The question uses the context of the previous one, in which $|F|=q$.
What I've tried so far:
For the first part, it seems there are two cases: i) $\beta\in F$ or ii) $\beta\in K\backslash F$. In i) it is easy, since $\beta^q=\beta$ and the extension is of degree $2$, $\beta^{q+1}=\beta^2\in F$. In ii), I suppose the minimal polynomial of $\beta$ in $K[x]$ is $x^2-\beta^2=x^2-\beta^{q+1}$... but where to go from here?
And I can't even begin to see what to do with the second part of the problem. Is any of this right so far? What do I do next if so? Thanks so much, ya'll. I'll go try to answer a question I can help with in the meantime.
REPLY [6 votes]: We have $x^q=x$ for all $x\in F$ and $x^{q^2}=x$ for all $x\in K$. Since $q$ is a power of $p$ and $p$ is the characteristic of $K$, the map $\sigma\colon y\mapsto y^q$ is an automorphism of the field $K$. Note that $\sigma$ fixes all the points of $F$, and it cannot fix any point of $K\setminus F$ (otherwise, the polynomial $X^q-X$ would have more than $q$ roots).
Now, $\sigma(\beta^{q+1})=\beta^{q(q+1)}=\beta^{q^2}\beta^{q}=\beta\beta^q=\beta^{q+1}$; thus $\beta^{q+1}$ is a fixed point of $\sigma$, so it must lie in $F$, which is what you expected.<|endoftext|>
TITLE: Find the roots of $e^x+e^{1/x} + a = 0$
QUESTION [5 upvotes]: Find the roots of this equation
$e^x + e^{1/x} + a = 0$
where $a \in \Bbb R$
Is there any nice formula for this type of equation?
REPLY [5 votes]: An actual analytical solution will not happen here. An analytical approximation can be given as follows.
For large enough negative $a$, there are two roots, reciprocals of each other. One is large and positive, the other is small and positive. The large positive one can be approximated:
$$x=\ln(-a-e^{1/x})=\ln(-a)+\ln(1+e^{1/x}/a) \approx \ln(-a)+e^{1/x}/a \approx \ln(-a)+1/a+1/(ax).$$
In the first step we Taylor expanded the logarithm, committing an error on the order of $(e^{1/x}/a)^2$. In the second step we Taylor expanded the exponential, committing an error on the order of $(1/x)^2/a$.
Running with this, we get a quadratic equation, and the solution which is consistent with the approximations we just made is
$$\frac{\ln(-a)+1/a+\sqrt{(\ln(-a)+1/a)^2+4/a}}{2}.$$
The square root can be linearly approximated to give
$$\ln(-a)+1/a+\frac{1}{a\ln(-a)+1}.$$
This is a pretty good approximation, judging by numerical evidence. Heuristically it should be a pretty good approximation, because above we made errors on the order of at most $1/(a \ln(-a)^2)$ on the right side, then we multiplied both sides by $x$ which is on the order of $\ln(-a)$. So the overall error in the solution to the quadratic should be on the order of $1/(a \ln(-a))$. The error in the last step turns out to be smaller than the others, so the overall error in the final approximation should be on the order of $1/(a \ln(-a))$. (In particular, the last term is not really "significant", according to these error estimates; in terms of order of errors, we would've been just as good without including the $1/(ax)$ term. Still, that term actually does reduce the error a little bit in numerical tests.)
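For what it's worth, a small numerical sanity check of this approximation (a Python sketch using only the standard library; the bracketing interval and helper names are my own choices): it finds the large positive root by bisection and compares it with the closed-form approximation above.

    import math

    def f(x, a):
        return math.exp(x) + math.exp(1.0 / x) + a

    def large_root(a):
        # bisection for the large positive root; assumes a is sufficiently negative
        lo, hi = 1.0, math.log(-a) + 10.0
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if f(mid, a) > 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    def approx(a):
        return math.log(-a) + 1.0 / a + 1.0 / (a * math.log(-a) + 1.0)

    for a in (-10.0, -1e3, -1e6):
        r = large_root(a)
        print(a, r, approx(a), abs(r - approx(a)))

The observed error shrinks roughly like $1/(a\ln(-a))$ as $-a$ grows, consistent with the heuristic estimate above.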
Higher order approximations will become quite cumbersome, if we proceed by this approach. But still, it seems that you can readily achieve convergence of numerical methods by beginning with this asymptotic as your initial guess.<|endoftext|>
TITLE: Show that $\int_0^1 \left(\left\lfloor\frac{\alpha}{x}\right\rfloor-\alpha\left\lfloor\frac{1}{x}\right\rfloor\right)\mathrm dx=\alpha \ln\alpha$
QUESTION [21 upvotes]: Show that the improper integral $\int_0^1 \left(\left\lfloor\frac{\alpha}{x}\right\rfloor-\alpha\left\lfloor\frac{1}{x}\right\rfloor\right)\mathrm dx=\alpha \ln\alpha$, for $\alpha\in(0,1)$.
This is a Riemann integral. My work:
The set of discontinuities of the integrand is
$$D=\left\{\frac1k:k\in\Bbb N\right\}\cup\left\{\frac{\alpha}{k}:k\in\Bbb N\right\}$$
And since $\left\lfloor\frac{\alpha}{x}\right\rfloor=0$ when $x>\alpha$, the integral can be simplified to
$$\int_0^1 \left(\left\lfloor\frac{\alpha}{x}\right\rfloor-\alpha\left\lfloor\frac{1}{x}\right\rfloor\right)\mathrm dx=\int_0^\alpha \left(\left\lfloor\frac{\alpha}{x}\right\rfloor-\alpha\left\lfloor\frac{1}{x}\right\rfloor\right)\mathrm dx-\alpha\int_{\alpha}^1 \left\lfloor\frac{1}{x}\right\rfloor\mathrm dx$$
I don't know how to continue from here; it is not clear how to handle the partition $D$ to simplify the integral. What I did here is just compute the value of $\int_{\alpha}^1 \left\lfloor\frac{1}{x}\right\rfloor\mathrm dx$ to see if I get some clue.
If there is no weird mistake somewhere:
$$\int_{\alpha}^1 \left\lfloor\frac{1}{x}\right\rfloor\mathrm dx=\int_\alpha^{\frac1{\left\lfloor 1/\alpha\right\rfloor}}\frac{\mathbf 1_{\Bbb N}(1/\alpha)\mathrm dx}{\lfloor 1/\alpha\rfloor}+\sum_{k=1}^{\lfloor1/\alpha\rfloor}\int_{\frac1{k+1}}^{\frac1k}\frac{\mathrm dx}{k}=\\=\mathbf 1_{\Bbb N}(1/\alpha)\frac{1-\alpha\lfloor 1/\alpha\rfloor}{\lfloor 1/\alpha\rfloor^2}+\sum_{k=1}^{\lfloor1/\alpha\rfloor}\frac1{k^2(k+1)}$$
which is not useful at all. So I am stuck with this problem; can you help me to show this identity (not going deeper than a Riemann integral background)? Thank you in advance.
REPLY [9 votes]: I would follow a different route:
First let us change variables $x\rightarrow1/y$ so the problem is equally stated as
$$
I(a)=\int_1^{\infty}dy\frac{1}{y^2}\left(\left\lfloor a y\right\rfloor- a\left\lfloor y\right\rfloor\right)
$$
there is a well known Fourier expansion of the floor function which leads to
$$
I(a)=\frac{1}{\pi}\int_1^{\infty}dy\frac{1}{y^2}\left(\sum_{k\geq1}\frac{\sin(2\pi k a y)-a\sin(2\pi k y)}{k}-\pi\frac{1-a}{2}\right)
$$
by setting $ay=q$ in the first part of this integral we might simplify
$$
I(a)=\frac{a}{\pi}\int_{a}^1dq\frac{1}{q^2}\sum_{k\geq 1} \frac{\sin(2 \pi k q)}{k}-\frac{1-a}{2}
$$
Since for $0<q<1$ the sum is simply $\Im\sum_{k\geq 1} \frac{e^{2 i \pi k q}}{k}=\frac{\pi}{2}-\pi q$, we are left with trivial integrations
$$
I(a)=a\int_{a}^1dq\left(\frac{1}{2q^2}-\frac{1}{q}\right)-\frac{1-a}{2}
$$
or
$$
I(a)=a\log(a)
$$
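As a quick numerical cross-check of the identity (a plain Python sketch with a crude midpoint Riemann sum, which is enough here because the integrand is bounded):

    import math

    def lhs(alpha, n=1_000_000):
        # midpoint Riemann sum of floor(alpha/x) - alpha*floor(1/x) over (0, 1]
        h = 1.0 / n
        total = 0.0
        for k in range(n):
            x = (k + 0.5) * h
            total += math.floor(alpha / x) - alpha * math.floor(1.0 / x)
        return total * h

    for alpha in (0.2, 0.5, 0.9):
        print(alpha, lhs(alpha), alpha * math.log(alpha))

The two columns agree to a few decimal places, as expected.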
A (simple) proof of the aforementioned Fourier expansion might be found here<|endoftext|>
TITLE: Topology, closure definition - well defined?
QUESTION [7 upvotes]: I came upon the following definition for closure,
Given a subset $A$ of a topological space $X$, the closure of $A$ is defined as the intersection of all closed sets containing $A$.
How is this definition well-defined? We need to know if the expression,
$$\bar{A} := \bigcap_{U \in S} U $$
where $S$ is the collection of closed sets containing $A$,
exists, and is unique?
(We do know $S$ is nonempty as $X \in S$. But why does this guarantee existence of intersection? More generally, is the intersection of the elements of any nonempty collection of sets well-defined?) I think I am now a bit confused on the fundamentals of sets.
May someone explain? Thank you so much.
EDIT:
Thank you so much for all the replies. I have now read some basic set theory from Enderton's text. Here is my attempt to prove from scratch (Which is pretty much the same as the comments). Please do tell me if any part is incorrect.
The three axioms which I use:
Power Set Axiom (PSA) $$\forall a, \exists B \forall x ( x \in B \Leftrightarrow x \subseteq a)$$
Extensionality Axiom (EA) $$\forall A \forall B [ \forall x (x \in A \Leftrightarrow x \in B ) \Rightarrow A = B ] $$
Axiom of Separation (AoS) For each formula $f(x)$ not containing $B$, the following is an axiom: $$ \forall t_1 \ldots \forall t_k
\forall c \exists B \forall x ( x \in B \Leftrightarrow x \in c, f(x))
$$
Well-definedness: Given a topological space $(X, \mathcal{T}_X)$ and $A \subseteq X$. Firstly, $\mathcal{T}_X$ is a well-defined set. This is because, by the PSA, the set
$\mathcal{P}(X)$ exists and is unique by EA.
So by the AoS, $\mathcal{T}_X : = \{ x \in \mathcal {P}(X) : f(x) \}$
(where $f(x)$ is a formula of $x$ for its openness) exists, and is unique by EA.
Define the collection of closed sets which contain $A$ by $$ \mathcal{C} := \{ x
\in \mathcal{P}(X) : x^c \in \mathcal{T}_X, A \subseteq x \}$$ which exists by AoS
and is unique by EA. Also, $\mathcal{C}$ is nonempty ($X$ is in
the set) so the set $$ \bigcap \mathcal{C} := \{ x \in X : \forall y
\in \mathcal{C}, x \in y \} = \bar{A}$$ again exists by AoS, and is
unique by EA, so the closure is well-defined.
REPLY [5 votes]: If $A\in S$, then $\bigcap S \subseteq A$. Thus by the axiom of separation, $\bigcap S$ exists if $S\ne\emptyset$. Because it can be given as an explicit class builder:
$$\bigcap S=\{x\mid\forall y\in S,x\in y\},$$
it is unique. Thus the intersection of any nonempty collection of sets is a well-defined set. (The intersection of the empty set is also well-defined, but equals the universe, $\bigcap\emptyset=V$, which is not a set.) In topology, though, we want more than that: we want to know that the closure is closed, which follows from one of the axioms of a topology - any arbitrary intersection of a nonempty collection of closed sets is closed.<|endoftext|>
TITLE: A scheme is affine iff the natural map $X\to \operatorname{Spec}\Gamma(X)$ is an isomorphism
QUESTION [6 upvotes]: We know that the functor $\operatorname{Spec}: \mathsf{Rings}^{\text{op}}\to \mathsf{Schemes}$ is right adjoint to the global section functor $\Gamma: \mathsf{Schemes}\to \mathsf{Rings}^{\text{op}}$. So there is a bijection
$$
\operatorname{Hom}(X, \operatorname{Spec} A) \to \operatorname{Hom}(A, \Gamma(X))
$$
which is natural in both variables. So if $X$ is a scheme, then we can set $A=\Gamma(X)$ and we get a bijection $\operatorname{Hom}(X, \operatorname{Spec} \Gamma(X)) \simeq \operatorname{Hom}(\Gamma(X), \Gamma(X))$. As a result, the identity map $\Gamma(X)\to \Gamma(X)$ gives rise to a natural map $X\to\operatorname{Spec}(\Gamma(X))$. My question is this:
If $X$ is affine, why does it follow that the natural map $X\to \operatorname{Spec}(\Gamma(X))$ is an isomorphism?
The proof probably uses some formal naturality properties of adjoints, but nobody usually bothers to explicitly write this stuff down! So I am also tagging (category-theory) to attract experts from there. I would very much appreciate it if someone could give a complete explanation.
Motivation. This result is pretty useful. For example, we need this fact to conclude that $X=\mathbb{A}^2-\{0, 0\}$ is not affine. Indeed, the standard proof goes by showing $\Gamma(X)=\mathbb{C}[x,y]$, but then the natural map $X\to \operatorname{Spec}(\Gamma(X))=\mathbb{A}^2$ is not an isomorphism, so $X$ is not affine.
REPLY [3 votes]: $\DeclareMathOperator{\Spec}{Spec}$
Let $X = \Spec A$ for some ring $A$. Then $\Gamma(X)=A$ by definition of the structure sheaf on $X$. Applying $\Spec$ to the identity map $\Gamma(X)\rightarrow A$ yields an isomorphism of affine schemes
$$
f\colon X = \Spec A \longrightarrow \Spec \Gamma(X),
$$
since $\Gamma(X)\rightarrow A$ is an isomorphism and $\Spec$ is functorial. Also note that $f$ is indeed the canonical morphism $X\rightarrow \Spec \Gamma(X)$.<|endoftext|>
TITLE: What's the name of this Powerful lemma?
QUESTION [7 upvotes]: I have read a book that states the following
Lemma: Let $p$ be a prime. If $p-1 \mid k$ then one has
$$\sum_{i=1}^{p} i^k \equiv -1 \pmod p$$
and if $p-1 \not | k$ then one has
$$\sum_{i=1}^{p} i^k \equiv 0 \pmod p$$
I can understand this lemma proof:
proof :
The first assertion is obvious as each term contributes $1$ except for $p^k$, which contributes $0$. For the second, let the sum be $S$ and note that $g^kS \equiv S \pmod p$ where $g$ is a primitive root $\pmod p$. Since $g^k \not \equiv 1 \pmod p$, one must have $S \equiv 0 \pmod p$. Done!
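(A quick brute-force check of both assertions for small primes, as a Python sketch:)

    def power_sum_mod(p, k):
        return sum(pow(i, k, p) for i in range(1, p + 1)) % p

    for p in (3, 5, 7, 11, 13):
        for k in range(1, 3 * p):
            expected = p - 1 if k % (p - 1) == 0 else 0   # -1 mod p equals p - 1
            assert power_sum_mod(p, k) == expected
    print("lemma verified for small primes")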
Question: I feel this lemma is very powerful. Are there other methods to prove it? What is the name of this lemma? And are there papers with applications of it?
REPLY [3 votes]: With high probability, this lemma doesn't have a name attributed to some author. One may simply call it a "congruence for integer power sums".
See the 2010 article by Kieren MacMillan and Jonathan Sondow, Proofs of Power Sum and Binomial Coefficient Congruences Via Pascal’s Identity, which is exclusively devoted to this lemma and still doesn't name it after somebody.
In this article, there are references to many alternative methods of proof, e.g. relying on the theory of primitive roots, or invoking Lagrange’s theorem, or employing Bernoulli numbers and finite differences.
Even more interestingly, this article presents a new elementary proof, using an identity for power sums proven by Pascal in 1654. This shows that technically it was possible that the lemma was known to Pascal or other people 350 years ago.
Further, the article also gives numerous references to applications, e.g. to prove theorems on Bernoulli numbers (Staudt-Clausen, Carlitz-von Staudt, Almkvist-Meurman) and to study the Erdös-Moser Diophantine equation as well as other exponential Diophantine equations and Stirling numbers of the second kind.
Also other standard volumes do not give the lemma a name, e.g. the classic textbook by G. H. Hardy and E. M. Wright,
An Introduction to the Theory of Numbers
, where it appears as theorem 119. This again supports that the lemma most likely does not have a name.
Also in here, the application is given to prove von Staudt's theorem.<|endoftext|>
TITLE: Is the Entropy a Function or a Functional?
QUESTION [8 upvotes]: As in the title, I was wondering whether the entropy of a system (it can be any entropy, from Boltzmann to Renyi etc, it is of no importance) is a function or a functional and why? Since it is mostly defined as:
$$S(p)=\sum_{i}g(p_i) $$
for some $g$ that has to be continuous etc then it has to be a functional. But then I see that $S_{BG}$ for example, which is defined as $S_{BG}=\sum_i p_i \log p_i$ just needs the value of each $p_i$ in order to be defined, right?
The way I see it, it has to be a functional but it is not clear to me why. Also many authors mention the entropy as a function while others call it a functional.
Thank you!
REPLY [4 votes]: A function is a mapping between a set of numbers and another set of numbers. A functional is a mapping from a set of functions to a set of numbers.
The entropy is defined as the Gibbs functional:
$$S(p)=-k\sum_jp_j\log(p_j)$$ where the argument $p=(p_j)$ is a probability distribution, i.e. a function. So the correct way to define the entropy, following Gibbs, is as a functional.<|endoftext|>
TITLE: Generalization of Jensen's inequality to multivariate functions
QUESTION [6 upvotes]: Is there a generalization of Jensen's inequality for convex multivariate functions? By convex, let's say $f$ is a multivariate function defined on the convex set $A$, and for all $x,y \in A$ and $\lambda \in [0,1]$,
$$f(\lambda x + (1-\lambda)y) \leq \lambda f(x) + (1-\lambda)f(y).$$
Then, letting $x_1,\ldots,x_n$ denote points in $A$, the result would be something to the effect of saying that for any $n$ points in $A$,
$$\frac{\sum_{i=1}^n f(x_i)}{n} \geq f \left(\frac{\sum_{i=1}^n{x_i}}{n} \right). $$
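As a quick sanity check of exactly this finite form, here is a small numpy sketch (the convex test function, the squared Euclidean norm, and the sample sizes are arbitrary choices of mine, not taken from the references below):

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):
        # a convex function on R^d: the squared Euclidean norm
        return float(np.sum(x**2))

    for _ in range(1000):
        pts = rng.normal(size=(7, 3))      # n = 7 points in R^3
        lhs = np.mean([f(p) for p in pts])
        rhs = f(pts.mean(axis=0))
        assert lhs >= rhs - 1e-12          # average of f  >=  f of the average
    print("no violation found")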
I do see a few articles that may be related:
Perlman, Michael D. "Jensen's inequality for a convex vector-valued function on an infinite-dimensional space." Journal of Multivariate Analysis 4.1 (1974): 52-65.
Merkle, Milan. "Jensen's inequality for multivariate medians." Journal of Mathematical Analysis and Applications 370.1 (2010): 258-269.
Aras-Gazic, G., et al. "GENERALIZATION OF JENSEN’S INEQUALITY BY HERMITE POLYNOMIALS AND RELATED RESULTS." Mathematical reports 17.2 (2015): 201-223.
Agnew, Robert A. "Multivariate version of a Jensen-type inequality." J. Inequal. in Pure and Appl. Math 6.4 (2005).
I do not think the first is particularly related if I'm interested in finite dimensional spaces, and my function is not vector-valued in any case. The second may be more related, but it seems to be generalizing in slightly different directions. The third is beyond my comprehension and the fourth, again, seems to be working on a slightly different generalization.
Are there no less technical generalizations of Jensen's to multivariate functions out there?
REPLY [3 votes]: Jensen's inequality always holds when you're dealing with a convex function whose domain is finite-dimensional. By restricting the domain of $f$ to $S := \text{span} \{x_1, \ldots, x_n\}$, we put the question into that setting and know that $f(\mathbb{E} X) \leq \mathbb{E} f(X)$ for any random vector $X$ taking values in $S$. In particular, it holds when the distribution of $X$ is uniform on $\{x_1, \ldots, x_n\}$.<|endoftext|>
TITLE: Why consider ramification only over number fields?
QUESTION [7 upvotes]: Is there a reason why one looks at ramification of prime ideals only over (rings of integers of) number fields? There surely are many more situations where one has rings with prime ideals.
REPLY [11 votes]: If $\pi:A\to B$ is any map of commutative rings whatsoever, there is a good notion of ramification. The examples mentioned so far (rings of integers of number fields or local fields, Riemann surfaces) assume that $\pi$ is an extension of Dedekind domains. In this setting, we start with a prime ideal $\mathfrak{p}\subset A$ and extend it to $B$, then factor it uniquely as $\mathfrak{p}B=\prod\mathfrak{p}_i^{e_i}$. We then say that $\mathfrak{p}$ ramifies in $B$ if we have some $e_i>1$.
But unique factorization of ideals into primes is something specific to Dedekind domains. In the more general setting, the thing to do is take the module (sheaf) of differentials $\Omega_{B/A}$ on $B$; its support on $B$ (set of primes $\mathfrak{q}\subset B$ for which $(\Omega_{B/A})_{\mathfrak{q}}\neq0$) is the ramification locus of the map $\pi$, and the prime ideals $\pi^{-1}(\mathfrak{q})\subset A$ form the branch locus of $\pi$; these are the ramified primes from before. Because the formation of $\Omega_{B/A}$ commutes with localization, a prime ideal $\mathfrak{p}\subset A$ ramifies in $B$ if and only if $\Omega_{B_\mathfrak{p}/A_\mathfrak{p}}\neq0$. In the case of rings of integers of number fields, we recover the definition from the previous paragraph.
Geometrically, the ramification and branch loci correspond to the (closed) loci on the source and target where the map of schemes (varieties) $Spec(B)\to Spec(A)$ fails to be smooth, given additional assumptions (namely that $A\to B$ is flat and locally of finite presentation). In the case where $A,B$ are Dedekind domains and finitely generated $\mathbb{C}$-algebras (i.e. $Spec(A)$ and $Spec(B)$ are smooth complex curves), we recover the geometric picture of branching for Riemann surfaces.<|endoftext|>
TITLE: Find a polynomial with integer coefficients which has a global minimum equal to (a)$- \sqrt{2}$, (b)$\sqrt{2}$
QUESTION [5 upvotes]: Find a polynomial with integer coefficients which has a global minimum equal to (a)$- \sqrt{2}$, (b)$\sqrt{2}$.
It is a high-school math contest problem. The answer is given:
$$(a) ~~~~~~~P(x)=N(2x^2-1)^2-2x^3+3x,~~~~N>1$$
$$(b) ~~~~~~~Q(x)=P(x^2)=N(2x^4-1)^2-2x^6+3x^2$$
No solution is given.
My initial approach was (before I saw the answer) to try finding the minimum of a general polynomial starting with degree $3$ and then try to match the coefficients to a given value of the minimum. However, the answer clearly shows that we require degrees $4$ and $8$ respectively, which makes my method of solution practically impossible.
It's easy enough to show that the answers are true, for example the case (a):
$$P'(x)=8N(2x^2-1)x-6x^2+3=(2x^2-1)(8Nx-2)=0$$
$$x_1=\frac{1}{\sqrt{2}},~~~~~~~x_2=-\frac{1}{\sqrt{2}},~~~~~~~x_3=\frac{1}{4N}$$
$$P''(x)=8N(2x^2-1)+4x(8Nx-2)$$
$$P''(x_1)=2\sqrt{2}(4N\sqrt{2}-2)>0$$
$$P''(x_2)=-2\sqrt{2}(-4N\sqrt{2}-2)>0$$
$$P''(x_3)=\frac{1}{N}-8N<0$$
So, $x_1$ and $x_2$ are minimum points, $x_3$ is a maximum point.
$$P(x_1)=\frac{3\sqrt{2}}{2}-\frac{\sqrt{2}}{2}=\sqrt{2}$$
$$P(x_2)=-\frac{3\sqrt{2}}{2}+\frac{\sqrt{2}}{2}=-\sqrt{2}$$
The case (b) follows trivially, if we replace $x \to x^2$.
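A quick numerical double-check of case (a) on a grid (a small numpy sketch; the grid and the values of $N$ are arbitrary choices):

    import numpy as np

    for N in (2, 3, 10):
        x = np.linspace(-3.0, 3.0, 2_000_001)
        P = N * (2 * x**2 - 1)**2 - 2 * x**3 + 3 * x
        print(N, P.min(), -np.sqrt(2))     # grid minimum is ~ -1.41421356 for each N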
How is this problem supposed to be solved? How are we supposed to find the degree of $P(x)$ and match the coefficients? Is there some theorem about a minimal value of a polynomial which might help?
REPLY [2 votes]: For part (a), it can be easily deduced that $f$ does not have degree $2$ (a quadratic with integer coefficients and a global minimum has a rational minimum value). Further, $f$ cannot have degree 3, as an odd degree polynomial has no global minimum. So assume
$$f(x) = sx^4 + bx^3 + cx^2 + dx + e$$
with $f(a) = -\sqrt{2}, f'(a) = 0$. As one last piece of guesswork, let $a = k\sqrt{2}$. Then we have
$$4sk^4 + 2bk^3\sqrt{2}+2ck^2+dk\sqrt{2}+e = -\sqrt{2}$$
$$8sk^3\sqrt{2}+6bk^2+2ck\sqrt{2}+d=0$$
from which we derive
$$4sk^4 + 2ck^2 + e = 0$$
$$2bk^3 + dk = -1$$
$$8sk^3 + 2ck = 0$$
$$6bk^2 + d = 0$$
Then $c = -4sk^2, e = 4sk^4, d = -1.5/k, b = 0.25/k^3$. Now we just need to find $k, s$ that makes all of these integers. $k = 1/2, s = 4$ does the trick. Then we have
$$f(x) = 4x^4 + 2x^3 - 4x^2 - 3x + 1$$
Which has all of our desired properties.<|endoftext|>
TITLE: Stable resolution of a $2\times2$ linear system
QUESTION [7 upvotes]: Cramer's method for the resolution of linear systems is known to be unstable, even in the $2\times2$ case. For general systems, stability can be improved by partial or full pivoting.
When you transpose the full pivoting principle to a $2\times2$, the procedure essentially amounts to
finding the LHS coefficient with the largest magnitude, let it be $a_{11}$ WLOG;
computing $x_2$ by determinants*,
computing $x_1$ by elimination of $x_2$ from equation $1$.
Can this improve stability? Is there a more stable solution?
*Whatever the choice of the pivot, the formula amounts to a ratio of $2\times2$ determinants. I wonder if first normalizing the pivot coefficient to $1$ makes any difference.
$$\frac{b_1-b_2\dfrac{a_{21}}{a_{11}}}{a_{22}-a_{12}\dfrac{a_{21}}{a_{11}}}\text{ vs. }\frac{b_1a_{11}-b_2a_{21}}{a_{11}a_{22}-a_{12}a_{21}}$$
REPLY [3 votes]: If your system has a fused multiply add instruction, then most determinants $$x = ad-bc$$ can be evaluated accurately using Kahan's method
$$\hat{w} = \text{fl}(bc), \quad e = \text{fl}(\hat{w} - bc), \quad \hat{f} = \text{fl}(ad-\hat{w}), \quad \hat{x} = \text{fl}(\hat{f}+e).$$
Here $\text{fl}(x)$ denotes the floating point representation of $x$. In the absence of underflow or overflow, we have the relative error bound $$|x - \hat{x}| \leq 2 u |x|.$$
Here $u$ is the unit roundoff. This is far better than we have any right to expect. Barring floating point exceptions you can solve a 2 by 2 linear system with a componentwise forward relative error which is at most $$\gamma_4 = \frac{4u}{1 - 4u}.$$
The proofs of these and related statements are contained in the paper.
Further analysis of Kahan's algorithm for the accurate computation of 2-by-2 determinants
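For concreteness, here is a small Python sketch of Kahan's algorithm for the $2\times2$ determinant (it needs a correctly rounded fused multiply-add; math.fma only exists from Python 3.13 on, so treat this as illustrative rather than canonical). The sign convention for the correction term differs slightly from the display above, but the algorithm is the same.

    import math

    def kahan_det2(a, b, c, d):
        # determinant a*d - b*c, using one fused multiply-add per product
        w = b * c                   # w_hat = fl(b*c)
        e = math.fma(b, c, -w)      # exact rounding error: b*c - w_hat
        f = math.fma(a, d, -w)      # f_hat = fl(a*d - w_hat)
        return f - e                # (a*d - w_hat) - (b*c - w_hat) = a*d - b*c

    # an ill-conditioned example: naive evaluation cancels to zero
    a, b, c, d = 1.0 + 2.0**-27, 1.0, 1.0, 1.0 - 2.0**-27
    print(a * d - b * c)            # 0.0, all information lost
    print(kahan_det2(a, b, c, d))   # -2**-54, the exact determinant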
If your system does not have fused multiply add instruction, then I would try to combine the regular TwoProduct and TwoSum algorithms, see the paper
Error-free transformations in real and complex floating point arithmetic
for a description of the these and other algorithms based on error-free transformations of basic arithmetic operations.
In truth, I have not completed the analysis, but I would be very surprised if this idea did not work. If I stumble, then I would fall back on the paper
Accurate sum and dot product
and treat the determinants as very short inner products.
Finally, the battle against overflow or underflow can be won by extracting and manipulating the exponents on your own.<|endoftext|>
TITLE: Existence of the natural density of the strictly-increasing sequence of positive integer?
QUESTION [6 upvotes]: Let $A=\{a_n\}$ be a strictly-increasing sequence of positive integers. The natural density of this sequence is defined by $\delta(A)=\lim_{n\rightarrow \infty} \frac{A(n)}{n}$ whenever the limit exists and where $A(n)$ is the number of elements of $A$ not exceeding $n$. Is there a strictly-increasing sequence of positive integers $A=\{a_n\}$ such that $\delta(A)$ does not exist?
REPLY [15 votes]: Consider the sequence of integers with an odd number of decimal digits.
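A small Python sketch showing why this works: for that sequence, the counting function evaluated at $n=10^k-1$ makes $A(n)/n$ swing between roughly $1/11$ and $10/11$, so the limit cannot exist.

    def A(n):
        # how many positive integers <= n have an odd number of decimal digits
        total, d = 0, 1
        while 10 ** (d - 1) <= n:
            lo, hi = 10 ** (d - 1), min(n, 10 ** d - 1)
            if d % 2 == 1:
                total += hi - lo + 1
            d += 1
        return total

    for k in range(1, 9):
        n = 10 ** k - 1
        print(n, A(n) / n)   # alternates near 0.909... and 0.0909...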
REPLY [9 votes]: Yes. Start with $1$. Then omit enough integers to reduce the ratio below $\frac{1}2$. Then include enough consecutive integers to increase the ratio above $1-\frac{1}3$. Then omit enough to reduce it below $\frac{1}4$. Then include enough to increase it above $1-\frac{1}5$. Keep going.<|endoftext|>
TITLE: A topology on the set of lines?
QUESTION [17 upvotes]: Of course any set $X$ can have a topology, but are there more natural topologies, metrics or similar on the set of straight lines in $\mathbb R^2$?
REPLY [4 votes]: Adding to the existing answers - which are quite correct - let me give yet another way of visualizing the space of lines.
Let's first look at the lines in $\mathbb{R}^2$ not passing through the origin; call this set $L_+$. An element $l$ of $L_+$ is specified by an element $\alpha_l$ of the punctured plane $\mathbb{R}^2\setminus\{(0, 0)\}$: namely, $\alpha_l$ is the point on $l$ closest to the origin. To put it another way, draw a line from the origin to $\alpha_l$; the perpendicular to this line, through $\alpha_l$, is $l$. It's easy to check that this is a homeomorphism between $L_+$ (with the subspace topology) and $\mathbb{R}^2\setminus\{(0, 0)\}$.
Now, what happens at the origin? Well, a line through the origin is specified by its angle: specifically, by a point on $S^1$ . . .
. . . Except that this winds up double-counting lines. E.g. the $x$-axis can be specified by either $(1, 0)$ or $(-1, 0)$. So really, the set of lines through the origin looks like $S^1$ with opposite points identified - that is, the projective space $\mathbb{RP}^1$. (This is of course homeomorphic to $S^1$, but that's peculiar to the number $1$.)
So what description does this give of the space of all lines look like? Well, it looks like the plane $\mathbb{R}^2$, with the origin replaced by a copy of the projective space $\mathbb{RP}^1$. Making this precise is a bit tricky, but a good exercise. And this turns out to be a kind of degenerate example of a crucial construction in algebraic geometry - the blowup (see https://en.wikipedia.org/wiki/Blowing_up).
Incidentally, it's worth noting that - while this description of the space of lines is correct, and has a number of nice properties - it is misleading in certain ways. In particular, the description sounds non-homogeneous (the origin seems like a special point), while it is clear from thinking about lines in the plane that the space of lines has no distinguished points (and this is clearer from the other descriptions).<|endoftext|>
TITLE: Fundamental group of a compact space with compact universal covering space
QUESTION [6 upvotes]: I have this problem for Riemannian manifolds, but I think that it is just a topological problem. I know that this is probably a silly question, but it has been a while since I studied general topology and algebraic topology.
Let $X$ be a compact topological space and assume that its universal covering space $\tilde{X}$ is also compact. How can I prove that the fundamental group of $X$ is finite?
Thanks!
REPLY [10 votes]: Consider the covering map $\pi \colon \tilde{X} \rightarrow X$. Above any $p \in X$, the fiber $\pi^{-1}(p)$ is a discrete closed subset of a compact space $\tilde{X}$ and so must be finite. By the general theory of covering spaces, if we fix some $\tilde{p} \in \tilde{X}$ with $\pi(\tilde{p}) = p$ then we obtain a bijection between $\pi_1(X,p)$ and $\pi^{-1}(p)$ given by sending (the homotopy class of) a based loop $\gamma \colon [0,1] \rightarrow X$ at $p$ to $\tilde{\gamma}(1)$ where $\tilde{\gamma} \colon [0,1] \rightarrow X$ is the unique lift of $\gamma$ to $\tilde{X}$ satisfying $\tilde{\gamma}(0) = \tilde{p}$. Thus, $\pi_1(X,p)$ is finite.
REPLY [3 votes]: The action of $\pi_1(X)$ on $\tilde X$ is proper and free. Suppose that $\pi_1(X)$ is infinite and consider pairwise distinct elements $(f_n)$ of $\pi_1(X)$; for $x\in \tilde X$ the sequence $f_n(x)$ has an accumulation point, since $\tilde X$ is compact. This contradicts the properness of the action of $\pi_1(X)$ on $\tilde X$.<|endoftext|>
TITLE: Cohomology of a group of order two with coefficients in a finite abelian group of odd order
QUESTION [8 upvotes]: I am looking for an elementary proof that the cohomology groups in the title are trivial in the positive degrees.
In more detail, let $G=\{1,s\}$ be a group of order two, and let $A$ be an abelian group with an action of $G$.
I need an elementary proof that if $A$ is finite of odd order, then
$H^1(G,A)=0$ and $H^2(G,A)=0$.
A non-elementary proof goes as follows. Since $|G|=2$, both $H^1(G,A)$ and $H^2(G,A)$ are killed by the multiplication by $2$. Clearly they are also killed by the multiplication by $|A|$. Since $2$ and $|A|$ are coprime, these cohomology groups are killed by the multiplication by $1$, hence they both are trivial.
I give an elementary formulation of the desired assertion.
Let $A$ be an abelian group and let $s\colon A\to A$ be an automorphism such that $s^2=1$.
Set
$$ N=s+1, \qquad T=s-1. $$
Then $TN=0$ and $NT=0$,
hence
$$ \mathrm{im\ } N\subseteq \ker T\quad\text{and}\quad \mathrm{im\ } T\subseteq \ker N. $$
I need an elementary proof of the following assertion:
Theorem. If $A$ is a finite abelian group of odd order, then $\ker T=\mathrm{im\ } N$ and $\ker N=\mathrm{im\ } T$.
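Not a proof, but an easy brute-force illustration of the theorem on one small group (a Python sketch; the choice $A=\mathbb{Z}/9\times\mathbb{Z}/3$ with $s(x,y)=(-x,y)$ is just one involution picked for the example):

    from itertools import product

    A = list(product(range(9), range(3)))    # Z/9 x Z/3, odd order 27

    def s(a):
        return ((-a[0]) % 9, a[1])           # an automorphism with s o s = id

    def N(a):                                # N = s + 1
        return ((s(a)[0] + a[0]) % 9, (s(a)[1] + a[1]) % 3)

    def T(a):                                # T = s - 1
        return ((s(a)[0] - a[0]) % 9, (s(a)[1] - a[1]) % 3)

    ker_T = {a for a in A if T(a) == (0, 0)}
    ker_N = {a for a in A if N(a) == (0, 0)}
    im_N = {N(a) for a in A}
    im_T = {T(a) for a in A}

    print(ker_T == im_N, ker_N == im_T)      # True True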
Motivation. Let $X$ be a quasiprojective variety with additional structure over ${\mathbb{C}}$.
Write $G=\mathrm{Gal}({\mathbb{C}}/{\mathbb{R}})$, then $G=\{1,s\}$, where $s$ is the complex conjugation.
Let $sX$ denote the variety with additional structure over ${\mathbb{C}}$
obtained from $X$ by the action of the complex conjugation $s$ on the coefficients of the equations defining $X$.
Assume that $sX$ is isomorphic to $X$.
We would like to know whether $X$ admits a real form.
Let $A=\mathrm{Aut}(X)$, and assume that $A$ is an abelian group.
Then one can construct an obstruction $\eta(X)\in H^2(G,A)$ to the existence of a real form of $X$,
see my question.
Now assume that $A$ is a finite abelian group of odd order.
Then by our theorem we have $H^2(G,A)=0$, hence $\eta(X)=0$ and therefore, $X$ admits a real form $X_{\mathbb{R}}$.
The set of isomorphism classes of such real forms is a principal homogeneous space of the abelian group $H^1(G,A)$,
see again my question.
By our theorem we have $H^1(G,A)=0$, hence this real form is unique.
Motivation for an elementary proof: A potential reader of my paper comes from analysis and would prefer an elementary proof.
REPLY [8 votes]: Suppose that $x \in \ker T$, so $s(x)=x$. Since $x$ has odd order, there exists a positive integer $n$ (in fact $n= (o(x)+1)/2$) with $2nx=x$. Then, since $s(nx)=nx$, we have $x = s(nx) + nx$, so $x \in {\rm im}\ N$.
$\ker N \subseteq {\rm im\ } T$ follows by a similar argument, or by comparing orders.<|endoftext|>
TITLE: Find all rational numbers $\frac p q$ such that $0 < p < q$ are relatively prime and $pq=25!$
QUESTION [7 upvotes]: Find all the rational numbers $\frac p q$ such that all the below three conditions are satisfied.
$$0<\frac{p}q<1,$$
$$p \hspace{4 mm} \text{and} \hspace{5 mm}q \hspace{5 mm}\text{are relatively prime, and}$$
$$pq=25!$$
My try
I feel that the answer should be $\sum_{r=1}^{9}\binom{9}{r}$.
I have used the 9 primes between 1 and 25,
but I guess that is not correct.
I still think I am making some silly mistake. Can you please provide the answer?
REPLY [4 votes]: Observation 1: $25! = 2^{22} \cdot 3^{10} \cdot 5^6 \cdot 7^3 \cdot 11^2 \cdot 13 \cdot 17 \cdot 19 \cdot 23$
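(A quick check of this factorization by trial division, as a Python sketch:)

    from math import factorial
    from collections import Counter

    def factorize(n):
        # naive trial division; fast enough for 25!
        f, d = Counter(), 2
        while d * d <= n:
            while n % d == 0:
                f[d] += 1
                n //= d
            d += 1
        if n > 1:
            f[n] += 1
        return dict(f)

    print(factorize(factorial(25)))
    # {2: 22, 3: 10, 5: 6, 7: 3, 11: 2, 13: 1, 17: 1, 19: 1, 23: 1}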
Observation 2: Writing $25!=mn$ with $\gcd(m,n)=1$ (so that $\{m,n\}=\{p,q\}$), $m$ and $n$ are built from these prime factors and no others. Also, if a prime factor divides $m$, it cannot divide $n$. Hence the prime factors must occur in clusters: all copies of a given prime stay together in the same factor.
The problem thus reduces to partitioning the 9 clusters $ 2^{22}, 3^{10} , 5^6 , 7^3 , 11^2 , 13 , 17 , 19 , 23$ into $m$ and $n$.
For any cluster, there are two choices, it could either be in $m$ or in $n$. So, there are $2^9$ ways to split them.
How do we eliminate the cases where $m > n$? Well, for any pair $(m, n)$, if $m > n$, switching them obtains a pair $(n, m)$, where $n < m$. Hence there are half as many ways to split the numbers in the way we desire.
That'd be $\frac{2^9}{2} = \boxed{2^8}$ ways.<|endoftext|>
TITLE: The equivalence of the two definitions of fractional Laplacian
QUESTION [8 upvotes]: Using the Fourier transform we can easily define the fractional Laplacian by
$$(-\Delta)^{s/2}f(x)=(|\xi|^s\hat f(\xi))^\vee(x), \ \ f\in C_0^\infty. $$
However, I learned that there is another definition using the principal value of singular integral
$$(-\Delta)^{s/2}f(x)=C_{n,s}P.V.\int_{\mathbb{R}^n}{\frac{f(x)-f(y)}{|x-y|^{n+s}}dy}, \ \ 0
TITLE: How many digits of the googol-th prime can we calculate (or were calculated)?
QUESTION [15 upvotes]: Here, a lower and upper bound for the $n$-th prime are given.
Applying the given bounds
$$n(\ln(n\cdot\ln(n))-1) k \left(\log(k) + \log(\log(k)) - 1 + \frac{\log(\log(k))-2.1}{\log k}\right)$$ for $k \ge 3$ which yields $2.3471221 \cdot 10^{102} < p_{10^{100}} < 2.3471265 \cdot 10^{102}$, so six digits are determined.
EDIT 2: Thanks to DanaJ, I see that this 2013 paper by Axler gives the following bounds:
$p_k < k \left(\log(k) + \log(\log(k)) - 1 + \frac{\log(\log(k))-2}{\log k}\right) - \frac{(\log(\log(k)))^2 - 6 \log(\log(k)) + 11.847}{(\log (k))^2}$ for $k \ge 2$ and
$p_k > k \left(\log(k) + \log(\log(k)) - 1 + \frac{\log(\log(k))-2}{\log k}\right) - \frac{(\log(\log(k)))^2 - 6 \log(\log(k)) + 10.273}{(\log (k))^2}$ for $k \ge 8009824$
which yields $2.347125652 \cdot 10^{102} < p_{10^{100}} < 2.347125801 \cdot 10^{102}$, determining the first seven digits.
Note that, while this paper gives the better asymptotic bound
$|\pi(x) - \mathrm{li}(x)| < 0.2795\frac{x}{(\log (x))^{3/4}}\exp(-\sqrt{\frac{\log (x)}{6.455}})$ for $x \ge 229$
it only determines the first three digits of $p_{10^{100}}$.
Of course, if we assume the Riemann Hypothesis we can get many more digits. The bound
$|\pi(x) - \mathrm{li}(x)| < \frac{\sqrt{x} \log(x)}{8\pi}$ for $x \ge 2657$
will give
$$2.347125735865764178036135909936302071965422425975\cdot10^{102}
TITLE: How was the integral for Zeta Function created
QUESTION [11 upvotes]: How was the zeta function integrated from
$$\zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^{s}}$$
To
$$\zeta(s) = \frac{1}{\Gamma (s)}\int_{0}^{\infty}\frac{x^{s-1}}{e^{x}-1}dx$$
I've tried googling this and surprisingly can't find much on it, even the wikipedia article for zeta function doesn't explain how this integral is derived or cite a source anywhere. I have no doubt it's true; I am just curious how it was obtained. I know very little about converting infinite sums to integrals.
REPLY [21 votes]: Instead of using the dominated or monotone convergence theorem, I like to prove it by elementary means.
For $Re(s) > 0$ and $n > 0$ (change of variable $y = nx$) : $$\Gamma(s)n^{-s} = \int_0^\infty x^{s-1} e^{-nx}dx$$
So that for $Re(s) > 0$ (using the geometric series) :
$$\Gamma(s) \sum_{n=1}^N n^{-s} = \int_0^\infty x^{s-1} \sum_{n=1}^N e^{-nx}dx = \int_0^\infty x^{s-1}\frac{1-e^{-Nx}}{e^{x}-1} dx$$
For $Re(s) > 1$ it is known that
$$\zeta(s) = \lim_{N \to \infty} \sum_{n=1}^N n^{-s}$$
Finally, we need to prove that (again for $Re(s) > 1$) :
$$\lim_{N \to \infty}\int_0^\infty x^{s-1}\frac{1-e^{-Nx}}{e^{x}-1} dx = \int_0^\infty \frac{x^{s-1}}{e^{x}-1} dx$$
which is obvious once we have shown that for $x >0$: $\displaystyle\left|\frac{x}{e^{x}-1}\right| < 1$, whence $\displaystyle\int_0^\infty \frac{x^{s-1}}{e^x-1}e^{-Nx}dx$ converges absolutely and $\to 0$ as $N \to \infty$
Overall, for $Re(s) > 1$ :
$$\Gamma(s) \zeta(s) = \lim_{N \to \infty} \Gamma(s) \sum_{n=1}^N n^{-s} = \lim_{N \to \infty} \int_0^\infty x^{s-1} \frac{1-e^{-Nx}}{e^x-1}dx = \int_0^\infty \frac{x^{s-1}}{e^{x}-1} dx$$<|endoftext|>
TITLE: If the entries of a positive semidefinite matrix shrink individually, will the operator norm always decrease?
QUESTION [11 upvotes]: Given a positive semidefinite matrix $P$, if we scale down its entries individually, will its operator norm always decrease? Put it another way:
Suppose $P\in M_n(\mathbb R)$ is positive semidefinite and $B\in M_n(\mathbb R)$ is a $[0,1]$-matrix, i.e. $B$ has all entries between $0$ and $1$ (note: $B$ is not necessarily symmetric). Let $\|\cdot\|_2$ denotes the operator norm (i.e. the largest singular value). Is it always true that
$$\|P\|_2\ge\|P\circ B\|_2?\tag{$\ast$}$$
Background. I ran into this inequality in another question. Having done a numerical experiment, I believed the inequality is true, but I hadn't been able to prove it. If $(\ast)$ turns out to be true, we immediately obtain the analogous inequality $\rho(P)\ge\rho(P\circ B)$ for the spectral radii because $\rho(P)=\|P\|_2\ge\|P\circ B\|_2\ge\rho(P\circ B)$.
Remarks.
There is much research on inequalities about spectral radii or operator norms of Hadamard products. Often, either all multiplicands in each product are semidefinite or all of them are nonnegative. Inequalities like those two here, which involve mixtures of semidefinite matrices with nonnegative matrices, are rarely seen.
I have tested the inequality for $n=2,3,4,5$ with 100,000 random examples for each $n$. No counterexamples were found. The semidefiniteness condition is essential. If it is removed, counterexamples with symmetric $P$s can be easily obtained. The inequality is known to be true if $P$ is also entrywise nonnegative. So, if you want to carry out a numerical experiment to verify $(\ast)$, make sure that the $P$s you generate have both positive and negative entries.
One difficulty I met in constructing a proof is that I couldn't make use of the submultiplicativity of the operator norm. Note that equality occurs if $B$ is the all-one matrix, which has spectral norm $n\,(>1)$. If you somehow manage to extract a factor like $\|B\|_2$ from $\|P\circ B\|_2$, that factor may be too large. For a similar reason, the triangle inequality also looks useless.
REPLY [11 votes]: It is not always true.
The following matrix is positive semidefinite with norm $3$:
$$
P := \left(\begin{array}{ccc}
2 & 1 & 1\\
1 & 2 & -1\\
1 & -1 & 2\\
\end{array}\right)
$$
Use $B$ to poke out the $-1$'s and you get
$$
P \circ B = \left(\begin{array}{ccc}
2 & 1 & 1\\
1 & 2 & 0\\
1 & 0 & 2\\
\end{array}\right),
$$
which is positive semidefinite with norm $2 + \sqrt{2} > 3$.<|endoftext|>
TITLE: If you take the reciprocal in an inequality, would it change the $>/< $ signs?
QUESTION [15 upvotes]: Example:$$-16<\frac{1}{x}-\frac{1}{4}<16$$
In the example above, if you take the reciprocal of $$\frac{1}{x}-\frac{1}{4} = \frac{x}{1}-\frac{4}{1}$$
would that flip the $<$ to $>$ or not?
In other words, if you take the reciprocal of $$-16<\frac{1}{x}-\frac{1}{4}<16$$ would it be like this: $$\frac{1}{-16}>\frac{x}{1}-\frac{4}{1}>\frac{1}{16}$$
REPLY [2 votes]: Here are some ideas for how reciprocals work.
A different kind of reciprocal
From a picture point of view, reciprocals have to do with "turning fractions upside down"— like turning $\frac{a}{b}$ into $\frac{b}{a}$.
From your point of view, which is reasonable, turning $\frac{a}{b} + \frac{c}{d}$ into $\frac{b}{a} + \frac{d}{c}$ makes a certain kind of reciprocal.
But in most algebra classes, people use the term reciprocal to talk about starting with a term like $x$ and replacing it with one-divided-by-$x$. Because $\frac{1}{\frac{a}{b} + \frac{c}{d}}$ is not the same as $\frac{b}{a} + \frac{d}{c}$, your way of talking about reciprocals and the way people talk about reciprocals in algebra classes are different.
You can take reciprocals using only multiplication and division
In algebra, there are rules for what you can do to an equation or inequality to make sure it stays true. For example, you can add the same number to both sides of the equation, or you can multiply both sides of the equation by the same number (except zero).
You can take the reciprocal of an equation just by performing several multiplication and division operations in a row. Here's why:
If you have an equation written like: $$\frac{a}{b} = \frac{c}{d},$$
you can turn it into its reciprocal equation by taking the following steps:
You can multiply both sides by the denominators $b$ and $d$:
$$\frac{a\times b \times d}{b} = \frac{c\times b \times d}{d}$$
Some factors cancel, leaving you with:
$$a \times d = c \times b.$$
You can divide both sides by $a$ and $c$ (the numerators from the original equation), giving you:
$$\frac{a \times d}{a \times c} = \frac{c\times b}{a \times c}.$$
Some factors cancel, leaving you with
$$\frac{d}{c} = \frac{b}{a}.$$
which is the reciprocal of your original equation! Because all we did was perform multiplication and division, we know that whenever the original equation is true $a/b = c/d$, the reciprocal equation is also true $b/a = d/c$.
If you had an inequality $\frac{a}{b} < \frac{c}{d}$, the steps for taking the reciprocal would be the same — but you'd additionally have to keep track of whether $a$,$b$,$c$,and $d$ were negative so you would know when to flip the $<$ sign.
You can't take the reciprocal of a sum by taking the sum of the reciprocals
In general, you can't start with a sum of fractions like $\frac{a}{b} + \frac{c}{d} = \frac{e}{f} + \frac{g}{h}$, and convert it into $\frac{b}{a} + \frac{d}{c} = \frac{f}{e} + \frac{h}{g}$ by following the rules of algebra for manipulating equations.
Of course, you already know that if you have $\frac{a}{b} + \frac{c}{d} = \frac{e}{f} + \frac{g}{h}$, you can safely write:
$$\frac{1}{\frac{a}{b} + \frac{c}{d}} = \frac{1}{\frac{e}{f} + \frac{g}{h}}$$
— but in general, $\frac{1}{\frac{a}{b} + \frac{c}{d}}$ will be a very different number from $\frac{b}{a} + \frac{d}{c}$.<|endoftext|>
TITLE: Understanding the tensor-hom adjunction intuitively
QUESTION [14 upvotes]: I'm currently trying to teach myself some category theory. Recently, I learned that the tensor product is left adjoint to the hom functor in suitable categories, e.g. vector spaces with linear maps, i.e. for vector spaces $U$, $V$, and $W$,
$$\mathrm{Hom}(U \otimes V, W) \cong \mathrm{Hom}(U, \mathrm{Hom}(V,W))$$
I am trying to gain some intuition for adjoints by looking at special cases. This one is puzzling me a bit. Here is one of my attempts to get a grasp of it:
If my understanding is correct, a (covariant) $k$-tensor $\omega$ on a vector space $V$ can be defined equivalently as either an element of the $k$-fold tensor product of the dual space $V^*$, i.e. $\omega \in \bigotimes_{i=1}^k V^*$, or as a $k$-form on $V$, i.e. a $k$-linear map $\omega: \bigoplus_{i=1}^k V \to \mathbb{R}$. Thus, we have a bijection between $\bigotimes_{i=1}^k V^*$ and the set of $k$-forms on $V$. Furthermore, if $V$ is finite dimensional, then we also have the well-known isomorphism $V^* \otimes V^* \cong \mathrm{Hom}(V,V^*)$.
It seems to me that these observations can be interpreted as special cases of the adjunction above, or are at least related in some way. In particular, we have
$$\mathrm{Hom}(V^* \otimes V^*, V) \cong \mathrm{Hom}(V^*, \mathrm{Hom}(V^*,V))$$
What is the proper way to interpret these facts in the language of category theory? Can this be generalized at all to help give more intuition for the adjunction in terms of the properties of tensor products?
Thank you!
REPLY [2 votes]: $\DeclareMathOperator{Hom}{Hom}$
I just want to give an explicit example, where I used this tensor-hom adjunction.
Say we have a finite-dimensional inner product space $(V,\langle,\rangle)$ and a subspace $W\subset V.$ Then we define the orthogonal complement of $W$ as
$$W^\perp=\{v:(\forall w\in W)\langle v,w\rangle=0\}.$$
However I would like to see $W^\perp$ in more "algebraic" form.
So as you noticed, due to the adjunction, my inner product $\langle,\rangle:V\otimes V\to\mathbb{R}$ can be seen as an element of $\Hom(V,V^*).$ Let us denote this as $L_{\langle,\rangle}:V\to V^*.$ Explicitly we have that
$$(L_{\langle,\rangle}(v_1))(v_2)=\langle v_1,v_2\rangle.$$
Now consider the inclusion $i:W\hookrightarrow V$ and the associated dual map $i^*:V^*\to W^*.$ Let us compute the kernel of the composition $i^*\circ L_{\langle,\rangle}:V\to V^*\to W^*.$ We have that
$$\ker(i^*\circ L_{\langle,\rangle})=\{v:i^*(L_{\langle,\rangle}(v))=0\}=\{v:\forall(w\in W)L_{\langle,\rangle}(v)(w)=0\}=\{v:\forall(w\in W)\langle v,w\rangle=0\}=W^\perp.$$
Hence we presented $W^\perp$ as a kernel of some linear mapping.
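Here is a small finite-dimensional illustration of that kernel description (a sketch of my own; the particular matrices and the use of SciPy's null_space routine are my choices, not part of the answer):

    # W^perp computed as the kernel of i* o L_<,> , in coordinates.
    import numpy as np
    from scipy.linalg import null_space

    G = np.diag([2.0, 1.0, 3.0])      # Gram matrix of an inner product on V = R^3
    B = np.array([[1.0, 0.0],         # columns of B span the subspace W of V
                  [1.0, 1.0],
                  [0.0, 2.0]])

    # i* o L_<,> : V -> W* has matrix B^T G (its rows are the functionals <w_j, .>)
    K = null_space(B.T @ G)           # columns span ker(i* o L_<,>) = W^perp

    print(np.allclose(B.T @ G @ K, 0))   # True: every kernel vector is <,>-orthogonal to W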
Such an approach also allows you to define non-degeneracy and reflexivity of a bilinear mapping in the language of kernels and images. One can also show that two maps $F:A\to B, G:B\to A$ are adjoint if and only if $L_{\langle,\rangle_B}\circ F=G^*\circ L_{\langle,\rangle_A}.$
More compactly: the tensor-hom adjunction allows you to translate "set-theoretic" conditions into "algebraic" ones. It works both ways, by the way.<|endoftext|>
TITLE: Relationship between $C_c^\infty(\Omega,\mathbb R^d)'$ and $H_0^1(\Omega,\mathbb R^d)'$
QUESTION [5 upvotes]: Let
$d\in\mathbb N$
$\Omega\subseteq\mathbb R^d$ be open
$\langle\;\cdot\;,\;\cdot\;\rangle$ denote the inner product on $L^2(\Omega,\mathbb R^d)$
$\mathcal D:=C_c^\infty(\Omega,\mathbb R^d)$ and $$H:=\overline{\mathcal D}^{\langle\;\cdot\;,\;\cdot\;\rangle_H}\color{blue}{=H_0^1(\Omega,\mathbb R^d)}$$ with $$\langle\phi,\psi\rangle_H:=\langle\phi,\psi\rangle+\sum_{i=1}^d\langle\nabla\phi_i,\nabla\psi_i\rangle\;\;\;\text{for }\phi,\psi\in\mathcal D$$
How are the topological dual spaces $\mathcal D'$ and $H'$ of $\mathcal D$ and $H$ related?
Let me share my thoughts and please correct me, if I'm wrong somewhere (and feel free to leave a comment, if everything is correct):
Let $f\in\mathcal D'$. If we equip $\mathcal D$ with the restriction $\left\|\;\cdot\;\right\|_{\mathcal D}$ of the norm induced by $\langle\;\cdot\;,\;\cdot\;\rangle_H$, then $f$ is a bounded, linear operator from $(\mathcal D,\left\|\;\cdot\;\right\|_{\mathcal D})$ to $\mathbb R$. Thus, since $\mathcal D$ is a dense subspace of $(H,\langle\;\cdot\;,\;\cdot\;\rangle_H)$, we can apply the bounded linear transform theorem and obtain the existence of a unique bounded, linear operator $F:(H,\langle\;\cdot\;,\;\cdot\;\rangle_H)\to\mathbb R$ (i.e. $F\in H'$) with $$\left.F\right|_{\mathcal D}=f\tag 1$$ and $$\left\|F\right\|_{H'}=\left\|f\right\|_{\mathcal D'}\tag 2$$ where $\left\|\;\cdot\;\right\|_{\mathcal D'}$ denotes the operator norm on $(\mathcal D,\left\|\;\cdot\;\right\|_{\mathcal D})'$.
On the other hand, if $F\in H'$ and $$f:=\left.F\right|_{\mathcal D}\;,$$ then we can show that $f\in\mathcal D'$ where $\mathcal D'$ is equipped with the usual topology. It's clear that $(2)$ is verified too.
Is there any mistake in my argumentation? And what's meant if $\nabla\pi$ with $\pi\in C_c^\infty(\Omega)'$ is claimed to be an element of $H$?
REPLY [2 votes]: I think your second argument is somewhat going into the wrong direction. You want to show that $H'\subset\mathcal{D}'$, so have to show two things:
1) Given $f\in H'$, you have $f\in\mathcal{D}'$, which follows since $\mathcal{D}$ is continuously embedded into $H$ (that you still have to show).
2) If two functionals $f,g\in H'$ coincide on $\mathcal{D}$, then they are equal. This follows since $\mathcal{D}\subset H$ is a dense subspace. Note that this does not mean that every functional on $\mathcal{D}$ has an extension to $H$. It only means if there is an extension, then it has to be unique, i.e., there is at most one extension.<|endoftext|>
TITLE: Is there any book similar to "Halmos Naive Set Theory" in Category Theory?
QUESTION [9 upvotes]: When I wanted to learn set theory in high school, I found Halmos Naive Set Theory book very readable and understandable. But now, at university, I have been searching for a similar book in category theory, but I haven't found any book like that until now.
What is your suggestion?
Thanks in advance.
REPLY [2 votes]: Leinster recently came out with a fantastic book
Basic Category Theory, Tom Leinster, 2014
It is shallow in terms of proofs but it is deep with examples. This really will help you understand category theory from whichever mathematics you may already know best.<|endoftext|>
TITLE: Limit of a summation involving fractional parts
QUESTION [5 upvotes]: Working with some problems on the floor function, I noticed that the sum
$$\frac {1}{n}\sum_{{\sqrt{n}}\leq x\leq n}\left\{\sqrt {x^2-n}\right\} $$
where $n$ and $x$ are integers, $\left\{f(x)\right\}$ denotes the fractional part of $f(x)$, and $n$ tends to $\infty$, seems to converge to $\approx 0.44...$. For example, for $n=10^6$, the sum gives $0.4414959...$. I would be interested to know whether there is a closed expression for this value.
I tried to solve this starting from commonly used formulas involving the floor function, but failed to prove it.
REPLY [4 votes]: It is easier to compute:
$$ L=\lim_{m\to +\infty}\frac{1}{m^2}\sum_{m\leq x\leq m^2}\left\{\sqrt{x^2-m^2}\right\}=\lim_{m\to +\infty}\frac{1}{m^2}\sum_{0\leq x\leq m^2-m}\left\{\sqrt{x^2+2mx}-(x+m)\right\}. \tag{1}$$
Obviously the argument of the last fractional part is always negative, and the real solution of $\sqrt{x^2+2mx}-(x+m)=-1$ is given by $x=\frac{(m-1)^2}{2}$ and the real solution of $\sqrt{x^2+2mx}-(x+m)=-k$ is given by $x_k=\frac{(m-k)^2}{2k}$. By Riemann sums, our limit equals the limit of $\frac{1}{m^2}$ times
$$ \int_{x_1}^{m^2-m}\left(\sqrt{x^2+2mx}-(x+m)+1\right)\,dx + \int_{x_2}^{x_1}\left(\sqrt{x^2+2mx}-(x+m)+2\right)\,dx+\ldots $$
as $m\to +\infty$, i.e.
$$\large\scriptstyle \lim_{m\to +\infty}\frac{1}{m^2}\left(\frac{m^2}{4} \left(1-2\log(2m)\right)+\frac{1}{16}+\frac{m^2-1}{2}+\sum_{k=1}^{m-1}(k+1)\left(\frac{(m-k)^2}{2k}-\frac{(m-k-1)^2}{2k+2}\right)\right)$$
that simplifies to:
$$\lim_{m\to +\infty}\frac{1}{m^2}\left(\frac{m^2}{4} \left(1-2\log(2m)\right)+\frac{1}{16}+\frac{m^2-1}{2}+\frac{2m^2 H_{m-1}-m^2-m+2}{4}\right)$$
and finally to:
$$ L=\color{red}{\frac{1+\gamma-\log 2}{2}}=0.44203424217\ldots $$
where $\gamma$ is the Euler-Mascheroni constant; the last step uses the asymptotic formula $H_{m-1}=\log m+\gamma+o(1)$ for harmonic numbers.
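For what it's worth, a brute-force numerical check of my own (convergence is slow, so at $n=10^6$ the finite sum is still noticeably below the limit):

    # Finite-n sum vs. the claimed limit (1 + gamma - log 2)/2.
    import math
    import numpy as np

    n = 10**6
    s = sum(math.sqrt(x * x - n) % 1.0 for x in range(math.isqrt(n - 1) + 1, n + 1))
    print(s / n)                                    # ~ 0.44149...  (the value quoted in the question)
    print((1 + np.euler_gamma - math.log(2)) / 2)   # 0.4420342...  (the claimed limit)

The finite-$n$ value matches the one quoted in the question, and it drifts slowly toward the claimed limit.<|endoftext|>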
TITLE: precedence problem of multiple implication operators in logics
QUESTION [5 upvotes]: Should
a→b→c
be read as
(a→b)→c
or
a→(b→c)?
I used an online truth table generator (http://logic.stanford.edu/intrologic/secondary/applications/babbage.html) to test it, and it indicated that a→(b→c) is the correct reading.
But this article says logicians use (a→b)→c; see: Boolean algebra operation precedence?
So I wonder: in the field of logic, what is the standard way to read a sentence with multiple implication operators, such as a→b→c?
REPLY [3 votes]: In almost every context you will encounter $a \implies b \implies c$ it means
$$a \implies (b \implies c)$$
This is so common in constructive logic that it is effectively a universal standard. The fake reason for it is that
$$a_0 \implies (a_1 \implies (a_2 \implies (\dots \implies b)))$$
is propositionally equivalent to
$$(a_0 \land a_1 \land a_2 \dots) \implies b$$
which makes it a very easy convention to work with, since most theorems have a list of conditions and 1 conclusion. But the real reason for the convention comes from typed lambda calculus, which is the basis of constructive logic. Suppose you have a lambda expression
$$\lambda y. \lambda x. \lambda w. V$$
and you have a predicate $T(n)$ that represents "$n$ is of the appropriate type". Let
$A$ be $T(y)$
$B$ be $T(x)$
$C$ be $T(w)$
$D$ be $T(V)$
Then the statement that "$\lambda y. \lambda x. \lambda w. V$ is appropriately typed" is propositionally:
$$T(\lambda y. \lambda x. \lambda w. V) = (A \implies B \implies C \implies D)$$
if you associate the implication to the right as above. The similarity between that notation and the notation $F: A \to B$ for functions from $A$ to $B$ is apparent. Since constructive logic is built on top of typed lambda calculus, the convention is preserved. You will probably never encounter a modern logic publication that doesn't use this convention.<|endoftext|>
TITLE: Adding fractions of Groups of People
QUESTION [5 upvotes]: I understand the rules of adding fractions perfectly well. I know how to find common denominators, and understand why adding fractions without common denominators doesn't make sense.
But, today someone asked me about adding $\frac{5}{6}$ and $\frac{21}{28}$. They were wondering why dividing the two fractions and taking the average ($\frac{0.8333 + 0.75}{2} = 0.7917$) was giving them a different value than adding them together, and then dividing ($\frac{5}{6} + \frac{21}{28} = \frac{26}{34} = 0.7647$).
My initial reaction was the same as yours: I was taken aback by someone adding fractions in this way, and I gave a quick refresher of why this doesn't make sense, and how to properly add fractions.
I thought I had helped them fix their problem, until they gave more context: The $\frac{5}{6}$ was five people in a group of six who were observed washing their hands after a certain activity. The $\frac{21}{28}$ was twenty-one out of twenty-eight people who were observed washing their hands after a certain activity. The goal was to find the total number of people who had washed their hands, as a fraction of the total number of observed people. So, $\frac{26}{34}$ is actually correct in this case.
But, I'm still having trouble reconciling this with what I know about fractions. At least, what I think I know. Is there a term for these sorts of fractions? This isn't like having five slices of a six-slice pizza combined with twenty-one slices of a twenty-eight-slice pizza. This is like having one oven holding six pizzas, five of which have mushrooms, and another (huge) oven holding twenty-eight pizzas, twenty-one of which have mushrooms. What fraction of the pizzas have mushrooms? Is it $\frac{5}{6} + \frac{21}{28}$ ? I don't think so.
Is there some terminology I'm forgetting here? Why am I getting so tripped up by this?
REPLY [2 votes]: There is such a thing as a weighted average. A weighted average of the two fractions, with weights proportional to the sizes of the groups, gives the right answer.
The sizes of the groups are $6$ and $28$, so the weights are $\dfrac 6 {6+28} \approx 0.1764706$ and $\dfrac{28}{6+28} \approx 0.8235294$.
The two fractions are $5/6\approx 0.83333$ and $21/28 = 3/4 = 0.75$. The weighted average is
\begin{align}
(\text{first weight}\times\text{first value}) & {} + (\text{second weight}\times\text{second value}) \\[10pt]
= \left( \frac 6 {34} \times \frac 5 6 \right) & {} + \left( \frac{28}{34}\times\frac {21}{28} \right).
\end{align}
The $6$s cancel and the $28$s cancel and you get
$$
\frac 5 {34} + \frac{21}{34} = \frac{26}{34}.
$$<|endoftext|>
TITLE: Average distance between two points on a unit square.
QUESTION [6 upvotes]: Consider the unit square $S =[0,1]\times[0,1]$. I'm interested in the average distance between random points in the square.
Let $ \mathbf{a} = \left< x_1,y_1 \right>$ and $ \mathbf{b} = \left< x_2,y_2 \right>$ be random points in the unit square. By random, I mean that $x_i$ and $y_i$ are uniformly distributed on $[0,1]$.
The normal approach is to use multiple integration to determine the average value of the distance between $\mathbf{b}$ and $\mathbf{a}$. I would like to try another approach.
$\mathbf{a}$ and $\mathbf{b}$ are random vectors, and each element has known distribution. So, the vector between them also has known distribution. The difference between two uniformly random variables has triangular distribution.
So $\mathbf{c} = \mathbf{b} - \mathbf{a}$. Then, the average distance is the expectation of $\lVert \mathbf{c} \rVert$. Perhaps it would be easier to calculate the expectation of $\lVert \mathbf{c} \rVert^2$.
In any case, I am not sure how to calculate the expectation for $\lVert \mathbf{c} \rVert^2$.
Can someone guide me in the right direction?
REPLY [3 votes]: This has very little to do with linear algebra. The average distance is given by the integral:
$$ I = \int_{(0,1)^4}\sqrt{(x-X)^2+(y-Y)^2}\,d\mu.$$
Given $x$ and $X$, uniformly distributed and independent over $[0,1]$, the probability density function of $(x-X)^2$ is supported on $[0,1]$ and given by $-1+\frac{1}{\sqrt{t}}$. It follows that the PDF of $(x-X)^2+(y-Y)^2$ is supported on $[0,2]$ and given by $\pi+t-4\sqrt{t}$ on the interval $[0,1]$ and by $-2-t+4\sqrt{t-1}+2\arcsin\left(\frac{2}{t}-1\right)$ on the interval $[1,2]$. That leads to:
$$ I = \int_{0}^{1}\sqrt{t}\left(\pi+t-4\sqrt{t}\right)\,dt+\int_{1}^{2}\sqrt{t}\left(-2-t+4\sqrt{t-1}+2\arcsin\left(\frac{2}{t}-1\right)\right)\,dt$$
that simplifies to:
$$ I = \color{red}{\frac{2+\sqrt{2}+5\,\text{arcsinh}(1)}{15}}=0.521405433\ldots$$
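As a quick numerical cross-check of that value (my own sketch, not part of the derivation), a Monte Carlo estimate lands on the same number:

    # Monte Carlo estimate of the mean distance vs. the closed form.
    import math
    import random

    random.seed(0)
    N = 10**6
    acc = 0.0
    for _ in range(N):
        acc += math.hypot(random.random() - random.random(),
                          random.random() - random.random())
    print(acc / N)                                       # ~ 0.521 (Monte Carlo)
    print((2 + math.sqrt(2) + 5 * math.asinh(1)) / 15)   # 0.521405433...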
Convexity arguments are enough to prove that $\frac{1}{2}
TITLE: How do we prove that $\int_0^1 \ln x\left({1\over \ln{x}}+{1\over 1-x}\right)^2\,dx=\gamma-1?$
QUESTION [5 upvotes]: How do we prove that:
$$\int_{0}^{1}\ln{x}\left({1\over \ln{x}}+{1\over 1-x}\right)^2\, dx =\color{blue}{\gamma-1}?\tag1$$
The only idea that came to mind was this series
$$\sum_{k=1}^{\infty}{1\over 2^k(1+x^{-1/2^k})}={x\over x-1}-{1\over \ln{x}}\tag2$$
Or expanded $(1)$
$$\int_0^1 \left({1\over \ln x} + {2 \over 1-x}+{\ln x \over (1-x)^2} \right)\,dx=\gamma-1\tag3$$
$$\int_0^1 {\ln x \over (1-x)^2}\,dx=\sum_{n=0}^\infty (1+n)\int_0^1 x^n\ln x \,dx = \sum_{n=0}^\infty (1+n)\cdot{-1\over (1+n)^2}\tag4$$
But $(4)$ diverges!
$\int {1\over 1-x} \, dx=-\ln(1-x)$
$\int_0^1 {2\over 1-x} \, dx$ also diverges
$\int{1\over \ln x} dx = \ln(\ln x )+\ln x +{\ln^2 x\over 2\cdot2!}+{\ln^3 x \over 3\cdot 3!}+\cdots$
$\int_0^1 {1\over \ln x} \, dx$ diverges too.
How do we go about integrating $(1)$?
Help needed, thanks!
REPLY [8 votes]: Observe that $$ \int_{0}^{1}\left(\frac{1}{1-x}+\frac{\log\left(x\right)}{\left(1-x\right)^{2}}\right)dx\stackrel{x\rightarrow1-x}{=}\int_{0}^{1}\left(\frac{1}{x}+\frac{\log\left(1-x\right)}{x^{2}}\right)dx.$$ Fix $0
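In any case, $(1)$ is easy to confirm numerically; here is a quick check of my own (SciPy quadrature and NumPy's euler_gamma constant; the upper limit is clipped slightly below $1$ only to dodge harmless floating-point cancellation, since the integrand stays bounded there):

    # Numerical check that the integral in (1) equals gamma - 1 = -0.4227843...
    import numpy as np
    from scipy.integrate import quad

    f = lambda x: np.log(x) * (1 / np.log(x) + 1 / (1 - x)) ** 2
    val, err = quad(f, 0, 1 - 1e-8)
    print(val, np.euler_gamma - 1)

Both printed numbers should agree to several decimal places.<|endoftext|>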
TITLE: Why is continuous differentiability important?
QUESTION [9 upvotes]: In calculus, I would presume that the notion of continuous differentiability is important, which is why we have classes $C^1, C^2,\ldots,C^n$ which are defined in terms of having a continuous $n$th derivative. But why? Why is the derivative being continuous relevant at all?
What is the motivation for defining $C^n$ in terms of not merely being $n$ times differentiable, but $n$ times continuously differentiable? For which (important) theorems in single and multivariable calculus is the hypothesis of continuous differentiability absolutely required?
It is not required for either the fundamental theorem of calculus or integration by substitution, though it is often presented as being such.
REPLY [4 votes]: One reason $C^1$ is important is its practicality. Namely, there is a theorem that if $f$ is $C^1$ on an open set $U$ then $f$ is differentiable at all points of $U$. It's usually pretty easy to check $C^1$: often you simply look at the form of the coordinate functions of $f$ and observe, from your knowledge of elementary calculus, that they are differentiable and their derivatives are continuous. And once you've completed that check, voila, you conclude that $f$ is differentiable.
$C^2$ is also practical, namely via the theorem on equality of mixed 2nd partials (and, similarly, $C^n$ implies equality of mixed $n$th partials).<|endoftext|>
TITLE: A very tricky pseudo-proof of $0=-1$ through series and integrals
QUESTION [11 upvotes]: Dealing with a recent question I spotted a very nice exercise for Calc-2 students, i.e. to find the mistake in the following lines.
Lemma 1. For any $n\in\mathbb{N}$, we have: $$ \int_{0}^{1} x^n\left(1+(n+1)\log x\right)\,dx = 0. $$
Lemma 2. For any $x\in(0,1)$ we have: $$ \frac{1}{1-x}=\sum_{n\geq 0}x^n,\qquad \frac{\log x}{(1-x)^2}=\sum_{n\geq 0}(n+1) x^n\log(x). $$
By Lemmas 1 and 2 it follows that:
$$\begin{eqnarray*}(\text{Lemma 1})\quad\;\;\color{red}{0}&=&\int_0^1 \sum_{n\geq0} x^n\left(1+(n+1)\log x\right)\,dx\\[0.2cm](\text{Lemma 2})\qquad&=&\int_0^1 \left(\frac{1}{1-x} + \frac{\log x}{(1-x)^2}\right)\,dx\\[0.2cm](x\mapsto 1-x)\qquad&=&\int_0^1 \left(\frac{1}{x}+\frac{\log(1-x)}{x^2}\right)\,dx\\[0.2cm](\text{Taylor series of }x+\log(1-x))\qquad&=&-\int_0^1 \frac{1}{x^2} \sum_{k\geq2}\frac{x^k}k \,dx\\[0.2cm](\text{termwise integration})\qquad&=&-\sum_{k\geq 2} \frac{1}{k(k-1)}\\[0.2cm](\text{telescopic series})\qquad&=&-\sum_{m\geq 1} \left(\frac{1}{m}-\frac{1}{m+1}\right)=\color{red}{-1}.
\end{eqnarray*}$$
Now the actual questions: were you able to locate the fatal flaw at first sight?
Do you think it is a well-suited exercise for Calculus-2 (or Calculus-X) students?
REPLY [4 votes]: The error is at the very beginning, in the interchange of integral and summation. I’m too rusty (and too lazy!) to go beyond checking that the sequence of functions fails to satisfy a standard sufficient condition for the interchange, but the other steps are legitimate, so that must be the sticking point.
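To make the failure concrete, here is a small numerical illustration of my own (SciPy quadrature; the last upper limit is clipped slightly below $1$ to avoid floating-point cancellation, and the clipped tail is negligible):

    # Each partial sum integrates to 0, but the pointwise limit
    # 1/(1-x) + log(x)/(1-x)^2 integrates to -1, so sum and integral cannot be swapped.
    import numpy as np
    from scipy.integrate import quad

    def partial_sum(x, N):
        n = np.arange(N + 1)
        return float(np.sum(x ** n * (1 + (n + 1) * np.log(x))))

    for N in (1, 10, 100):
        print(N, quad(lambda x: partial_sum(x, N), 0, 1)[0])      # ~ 0 for every N

    limit = lambda x: 1 / (1 - x) + np.log(x) / (1 - x) ** 2
    print(quad(limit, 0, 1 - 1e-6)[0])                            # ~ -1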
In the U.S. calculus courses that I've taught or observed, this material would come in Calc. $2$, to the extent that it appeared at all, and the interchange of integral and summation wouldn't appear at all; that makes it an essentially impossible exercise. Moreover, most students in typical first-year calculus courses still have the notion that mathematics is algorithmic calculation; getting them to pay enough attention to details to understand why the non-existence of a zero of $\frac1x$ doesn’t contradict the intermediate value theorem, or even to remember that the sign of $x$ matters when multiplying an inequality $f(x)\le g(x)$ by $x$, is a non-trivial challenge, to the extent that the former often goes by the board.
This might be appropriate for a very good old-fashioned advanced calculus course; the undergraduate real analysis courses that I taught had a different emphasis and didn’t cover the necessary material.<|endoftext|>
TITLE: Is there a connection between the concepts of limits in ordinals, functions and categories?
QUESTION [6 upvotes]: In set theory there is the concept of a limit ordinal: Nonzero ordinals that are the supremum of all ordinals below them.
In functional analysis there are the concepts of limits of functions (and sequences): a value that the function comes arbitrarily close to at a point.
And in category theory there is a concept of a limit which is a universal cone.
Is there something common about all these ideas that justifies them all being called limits or is it a coincidence of language ?
REPLY [10 votes]: They are all special cases of limits in the category-theoretic sense.
Limit ordinals are a special case of least upper bounds in partially ordered sets. Given a partially ordered set $(X,\le)$, we may form a category whose objects are elements of $X$ where there is a single morphism from $x$ to $y$ whenever $x\le y$. Transitivity gives us composition and reflexivity gives us identity morphisms. In that case, the least upper bound of some subset $Y\subset X$ is precisely the limit of the diagram spanned by $Y$.
The limits of functions and sequences that we study in functional analysis and, more generally, in topology are in fact also a special case of least upper bounds in partially ordered sets, so they are also generalized by category-theoretic limits. If $X$ is a (topological) space, a filter on $X$ is a set $\mathcal F$ of subsets of $X$ such that:
$\emptyset\not\in\mathcal F$
If $Y\in\mathcal F$ and $Y\subset Z$ then $Z\in\mathcal F$
If $Y,Z\in\mathcal F$ then $Y\cap Z\in\mathcal F$
As an example, if $x\in X$ then the set of all neighbourhoods of $x$ (i.e., subsets of $X$ that contain some open neighbourhood of $x$) is a filter on $X$, called the neighbourhood filter $\mathcal N_x$. We say a filter $\mathcal F$ converges to $x$, and write $\mathcal F\to x$, if $\mathcal N_x\subset\mathcal F$.
What has this to do with convergence of sequences and functions? Well, suppose that $(x_n)$ is a sequence in $X$. Then we can define a filter $\mathcal S_{(x_n)}$ by:
$$
S_{(x_n)} = \left\{Y\subset X\;\colon\;\exists N \;.\;\textrm{if }n\ge N\textrm{ then }x_n\in Y\right\}
$$
the set of all subsets of $X$ that eventually contain every term of the sequence. You can check for yourself that $x_n\to x$ if and only if $\mathcal N_x\subset S_{(x_n)}$.
Limits of functions can be handled in a similar way. Now, given some space $X$, we may define a partially ordered set $F$ whose elements are the filters on $X$, ordered by inclusion. Let $\mathcal F$ be a filter whose limit we want to find. For example, we might have $\mathcal F=S_{(x_n)}$ for some sequence $(x_n)$. Given $x\in X$, define
$$
\mathcal L_{\mathcal F,x}=\left\{\mathcal G\in F\;\colon\; \mathcal G\subset\mathcal F, \mathcal G\to x\right\}
$$
Then $\mathcal F\to x$ if and only if $\mathcal F$ is the least upper bound in $F$ for $\mathcal L_{\mathcal F, x}$.<|endoftext|>
TITLE: What is this operator called?
QUESTION [28 upvotes]: If $x \cdot 2 = x + x$
and $x \cdot 3 = x + x + x$
and $x^2 = x \cdot x$
and $x^3 = x \cdot x \cdot x$
Is there an operator $\oplus$ such that:
$x \oplus 2 = x^x$
and $x \oplus 3 = {x^{x^x}}$?
Also, is there a name for such a set of operators ops where...
Ops(1) is addition
Ops(2) is multiplication
Ops(3) is exponentiation
Ops(4) is $\oplus$
...and so on
Also, is there a branch of math who actually deals with such questions? Have these questions already been answered like 2000 years ago?
REPLY [9 votes]: It is known as a tetration, and it is normally written as $^na$ where n is the height of the power tower. It is the fourth hyperoperation.
The zeroth hyperoperation is the successor function, and the first is the zeroth hyperoperation iterated, and so on
A more general way to define the $n$th hyperoperation, using the notation $H_n(a,b)$ where $n$ indexes the hyperoperation, is:
${\displaystyle H_{n}(a,b)={\begin{cases}b+1&{\text{if }}n=0\\a&{\text{if }}n=1{\text{ and }}b=0\\0&{\text{if }}n=2{\text{ and }}b=0\\1&{\text{if }}n\geq 3{\text{ and }}b=0\\H_{n-1}(a,H_{n}(a,b-1))&{\text{otherwise}}\end{cases}}}$
Some notations for hyperoperations (for $H_n(a,b)$) are:
Square bracket notation: $a[n]b$
Box notation: $a{\,{\begin{array}{|c|}\hline {\!n\!}\\\hline \end{array}}\,}b$
Nambiar's notation : $a\otimes ^{n-1}b$
Knuth's up arrow notation: $a\uparrow^{n-2}b$
Goodstein's notation: $G(a,b,n)$
Conway's chained arrow notation: $a\rightarrow b\rightarrow (n-2)$
Bowers exploding array function: $\{a,b,n,1\}$
Original Ackermann function: ${\begin{matrix}\phi (a,b,n-1)\ {\text{ for }}1\leq n\leq 3\\\phi (a,b-1,n-1)\ {\text{ for }}n\geq 4\end{matrix}}$<|endoftext|>
TITLE: How to compute the sine of huge numbers
QUESTION [18 upvotes]: For several days, I've been wondering how it would be possible to compute the sine of huge numbers like 100000! (radians). I obviously don't use double but cpp_rational from the boost multiprecision library. But I can't simply do 100000! mod 2pi and then use the builtin function sinl (I don't need more than 10 decimal digits..) as I'd need several million digits of pi to compute this accurately.
Is there any way to achieve this?
REPLY [14 votes]: I believe you may be able to calculate this without obscene numbers of digits of $\pi$ if you take advantage of the fact that these are factorials. To simplify the algebra, we can calculate $a_n=e^{i(n!)}$ instead (you want the imaginary part). Then $$a_{n+1}=e^{i(n!)(n+1)}=a_n^{n+1},$$ and it's perfectly reasonable to calculate $a_{100000}$ recursively with a high-precision library.
The downside is that to start the recursion you need a very good approximation of $e^i$, and I don't know if the error dependence works out any differently than in the $\pmod{2\pi}$ approach.
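For concreteness, here is a small Python/mpmath sketch of that recursion (my own illustration; the factorial size and working precision are arbitrary choices). In line with the caveat above, the working precision still has to grow roughly like the number of digits of $n!$, since the error in the initial $e^i$ gets multiplied by about $n!$ along the way.

    # Recursive computation of sin(n!) via a_{n+1} = a_n^(n+1), so a_n = e^{i n!}.
    from mpmath import mp, mpc, exp, sin, factorial

    mp.dps = 60              # plenty for n = 15; must scale with log10(n!) in general
    n_max = 15

    a = exp(mpc(0, 1))       # a_1 = e^{i * 1!}
    for n in range(1, n_max):
        a = a ** (n + 1)     # now a = e^{i * (n+1)!}

    print(a.imag)                  # imaginary part = sin(n_max!)
    print(sin(factorial(n_max)))   # direct evaluation, for comparison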
But to answer your actual question, Mathematica doesn't even break a sweat with the mere million digits needed for this:
> Block[{$MaxExtraPrecision = 1000000}, N[Sin[100000!], 10]]
-0.9282669319
takes about 15 ms on my computer.
For calculating the sine or cosine of a large arbitrary precision real number $x$, the gains of this method (which are tuned for $\sin n$ for integer $n$) are mostly lost, so I would recommend your original idea of reducing the argument $\bmod 2\pi$. As has been noted, the main bottleneck is a high-precision estimation of $\pi$. Your answer will be useless unless you can at least calculate $\frac{x}{\pi}$ to within $1$ (otherwise you may as well answer "somewhere between $-1$ and $1$"), so you need at least $\log_2(x/\pi^2)$ bits of precision for $\pi$. With $x\approx100000!$, that's about $1516701$ bits or $456572$ digits. Add to this the number $a$ of bits of precision you want in the result, so about $1516734$ bits of $\pi$ to calculate $33$ bits ($\approx 10$ digits) of $\sin x$ in the range $x\approx 100000!$.
Once you have an integer $n$ such that $y=2\pi n$ is close to $x$ (ideally $|x-2\pi n|\le2\pi$, it doesn't have to be perfectly rounded), calculate $\pi$ to precision $a+\log_2(n)$, so that $y$ is known to precision $a$, and then $x-y$ is precision $a$ and $\sin x=\sin (x-y)$ can be calculated to precision $a$ as well.<|endoftext|>
TITLE: Evaluate $\sum_ {n=1}^{\infty} \cot^{-1}(2n^2)$
QUESTION [5 upvotes]: I was trying to solve up this equation but couldn't move ahead.
$$\sum_ {n=1}^{\infty} \cot^{-1}(2n^2)$$
I wrote the expression as $$\sum_ {n=1}^{\infty} \tan^{-1}\left( \frac{1}{2n^2}\right)$$
I wanted to change the expression into such a form such that it can take up the form of $\tan^{-1}A-\tan^{-1}B$ so that all the terms except the second one get cancelled up but I am unable to think of any manipulation through which I can get the thing done.
Can anybody give me a hint on how to go ahead?
REPLY [5 votes]: Recognizing a telescoping sum, as Behrouz did, is the key to a simple proof.
I will go for the overkill. We have
$$ \sum_{n\geq 1}\arctan\frac{1}{2n^2}\leq \sum_{n\geq 1}\frac{1}{2n^2}=\frac{\pi^2}{12}<1\tag{1}$$
hence:
$$ \sum_{n\geq 1}\arctan\frac{1}{2n^2}=\text{Arg}\prod_{n\geq 1}\left(1+\frac{i}{2n^2}\right) \tag{2} $$
and since the Weierstrass product for the $\sinh$ function gives
$$ \prod_{n\geq 1}\left(1+\frac{z^2}{n^2}\right)=\frac{\sinh(z\pi)}{z\pi}\tag{3} $$
we have:
$$\sum_{n\geq 1}\arctan\frac{1}{2n^2}=\text{Arg}\left(\frac{\cosh\frac{\pi}{2}}{\pi}(1+i)\right)=\text{Arg}(1+i)=\color{red}{\frac{\pi}{4}}.\tag{4}$$
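A quick numerical sanity check of $(4)$, of my own and not part of the argument (the tail beyond $N$ terms is about $\tfrac{1}{2N}$, so the partial sums creep up to $\pi/4$ slowly):

    # Partial sums of sum arctan(1/(2 n^2)) versus pi/4.
    import math

    for N in (10, 1000, 100000):
        s = sum(math.atan(1 / (2 * n * n)) for n in range(1, N + 1))
        print(N, s)
    print(math.pi / 4)     # 0.7853981...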
The advantage of this approach is that it computes
$$ \sum_{n\geq 1}\arctan\frac{1}{n^2}=\frac{\pi}{4}-\arctan\left(\frac{\tanh\frac{\pi}{\sqrt{2}}}{\tan\frac{\pi}{\sqrt{2}}}\right)\tag{5} $$
(and similar ones) too, where it is not at all easy to write the general term of the LHS as a telescoping difference.<|endoftext|>
TITLE: Spectrum in functional-analysis and algebraic geometry
QUESTION [7 upvotes]: Why do we use the notion "spectrum" both in functional-analysis and in algebraic geometry? Are there any analogies?
REPLY [2 votes]: The main difference is that in Functional Analysis the "Spectrum" is the family of maximal ideals of a ring, while in Algebraic Geometry, as Grothendieck defined it, the Spectrum $Spec(A)$ of a commutative ring with unit is defined as the space (a topological space with the natural Zariski topology) whose points are the prime ideals of the ring. In particular, since all maximal ideals are prime, but not vice versa, $Spec(A)$ (in Grothendieck's sense) contains points that are not closed, namely all the points represented by ideals which are prime but not maximal. In the spectrum used in Functional Analysis, by contrast, all points are closed, which makes the topology of Grothendieck's Spectrum $Spec(A)$ much more interesting than the topology of the Gelfand spectrum in Functional Analysis.
It was actually one of the greatest insights of Alexander Grothendieck to realize that a good definition of the Spectrum in Algebraic Geometry had to include all the prime ideals, not just the maximal ideals as in Functional Analysis, and so would be considerably more general. It is important to understand that Grothendieck started his career precisely in Functional Analysis.<|endoftext|>
TITLE: Prove a matrix expression leads to an invertible matrix?
QUESTION [5 upvotes]: I want to prove matrix $C$ is invertible:
$$C=I-A^TB(B^TB)^{-1}B^TA(A^TA)^{-1},$$
where $I$ is an identity matrix of appropriate dimensions, and $(A^TA)^{-1}$ and $(B^TB)^{-1}$ imply both $A$ and $B$ have linearly independent columns.
I tried to achieve my goal by proving $\det(C)\neq0$ but got stuck.
REPLY [2 votes]: Given $\mathrm A \in \mathbb R^{m \times n}$ and $\mathrm B \in \mathbb R^{m \times p}$, both having full column rank, we define $\mathrm C \in \mathbb R^{n \times n}$ as follows
$$\mathrm C := \mathrm I_n - \mathrm A^T \mathrm B (\mathrm B^T \mathrm B)^{-1} \mathrm B^T \mathrm A (\mathrm A^T \mathrm A)^{-1}$$
Using the Weinstein-Aronszajn determinant identity,
$$\begin{array}{rl} \det (\mathrm C) &= \det (\mathrm I_n - \mathrm A^T \mathrm B (\mathrm B^T \mathrm B)^{-1} \mathrm B^T \mathrm A (\mathrm A^T \mathrm A)^{-1})\\\\ &= \det (\mathrm I_m - \mathrm A (\mathrm A^T \mathrm A)^{-1} \mathrm A^T \mathrm B (\mathrm B^T \mathrm B)^{-1} \mathrm B^T)\end{array}$$
Note that
$$\mathrm P_{\mathrm A} := \mathrm A (\mathrm A^T \mathrm A)^{-1} \mathrm A^T \qquad \qquad \qquad \mathrm P_{\mathrm B} := \mathrm B (\mathrm B^T \mathrm B)^{-1} \mathrm B^T$$
are the $m \times m$ projection matrices that project onto the column spaces of $\mathrm A$ and $\mathrm B$, respectively.
Hence,
$$\det (\mathrm C) = \det (\mathrm I_m - \mathrm P_{\mathrm A} \mathrm P_{\mathrm B})$$
and, thus, if $\mathrm I_m - \mathrm P_{\mathrm A} \mathrm P_{\mathrm B}$ is invertible, so is $\mathrm C$.<|endoftext|>
TITLE: whats the proof for $\lim_{x → 0} [(a_1^x + a_2^x + .....+ a_n^x)/n]^{1/x} = (a_1.a_2....a_n)^{1/n}$
QUESTION [9 upvotes]: This equation is given directly in my book and I don't know anything about its proof. I tried L'Hospital's rule by differentiating both the numerator and the denominator (division rule), but the result still comes out in indeterminate forms. I am a beginner and haven't practiced limits that much. This formula is really confusing me.
REPLY [4 votes]: $\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\Li}[1]{\,\mathrm{Li}}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
&\color{#f00}{%
\lim_{x \to 0}\bracks{%
\pars{a_{1}^{x} + a_{2}^{x} + \cdots + a_{n}^{x} \over n}^{1/x}}} =
\exp\pars{\lim_{x \to 0}\bracks{{1 \over x}\,
\ln\pars{a_{1}^{x} + a_{2}^{x} + \cdots + a_{n}^{x} \over n}}}
\\[3mm] = &\
\exp\pars{\lim_{x \to 0}{\bracks{%
a_{1}^{x}\ln\pars{a_{1}} + a_{2}^{x}\ln\pars{a_{2}} + \cdots + a_{n}^{x}\ln\pars{a_{n}}}/n \over
\bracks{a_{1}^{x} + a_{2}^{x} + \cdots + a_{n}^{x}}/n}}\qquad
\pars{~\mbox{L'H}\mathrm{\hat{o}\mbox{pital Rule}}~}
\\[3mm] = &\
\exp\pars{\ln\pars{a_{1}} + \ln\pars{a_{2}} + \cdots +\ln\pars{a_{n}} \over n} =
\color{#f00}{\pars{a_{1}a_{2}\ldots a_{n}}^{1/n}}
\end{align}<|endoftext|>
TITLE: Pointwise convergence of Fourier series in two dimensions
QUESTION [7 upvotes]: By Carleson's Theorem, we know that for every $f\in L^2(\mathbb{T})$
$$ f(x)=\lim_{N\rightarrow\infty}\sum_{k=-N}^N\hat{f}(k)e^{2\pi ikx}\;\text{ a.e.} $$
Suppose now that $f\in L^2(\mathbb{T}^2)$. Using Carleson's Theorem, which would be the easiest way to prove
$$ f(x,y)=\lim_{N\rightarrow\infty}\sum_{k,l=-N}^N\hat{f}(k,l)e^{2\pi i(kx+ly)}\;\text{ a.e.}? $$
Is it also true that
$$ f(x,y)=\lim_{M,N\rightarrow\infty}\sum_{|k|\leq M,\,|l|\leq N}\hat{f}(k,l)e^{2\pi i(kx+ly)}\;\text{ a.e.}? $$
(by $\lim_{M,N\rightarrow\infty}$ I mean the limit of a double sequence: given $\{a_{m,n}\}\subseteq\mathbb{C}$, we say that $\lim_{m,n\rightarrow\infty}a_{m,n}=L$ if for all $\epsilon>0$ there exists an $N_{\epsilon}\in\mathbb{N}$ such that $|L-a_{m,n}|<\epsilon$ for every $m,n\geq N_{\epsilon}$).
REPLY [7 votes]: Both of your questions were answered in short papers by Charles Fefferman:
C. Fefferman. On the convergence of multiple Fourier series. Bull. Amer. Math. Soc. 77 (1971), 744-745.
C. Fefferman. On the divergence of multiple Fourier series. Bull. Amer. Math. Soc. 77 (1971), 191-195.
In the second one he shows that the answer to your second question is no. The counterexample is very simple: consider $f(x,y)=e^{i\lambda xy}$ for large $\lambda$ and suppose that the linearizing functions associated to the supremum are $N(x,y)=\lambda x$ and $M(x,y)=\lambda y$ (you can safely ignore that these are not integers). Then one can show an estimate from below by $C\log(\lambda)$. Now let $\lambda\to\infty$, contradiction.
Details in the paper.
In the first one he answers your first question. It is a 2-page proof and uses only Carleson's theorem in one dimension. The key here is that there is still only a single scaling parameter $N$. Maybe it is reasonable to believe that the bound for the Carleson maximal operator associated to the rectangles $[-N,N]\times [-N,N]=N[-1,1]^2$ can be deduced from the appropriate bounds with $N [-1,1]\times (-\infty,\infty)$ and $N(-\infty,\infty)\times [-1,1]$ which each follow from Carleson's one-dimensional theorem by simply fixing the second (or first) variable. Again, see the paper for a way to make this idea precise.
Alternatively, it is also possible to essentially repeat the one-dimensional proof of Carleson's theorem in the two-dimensional (and higher-dimensional) setting. This is way more involved and there are some technical complications, but it has some benefits (e.g. it allows you to obtain more precise information for $f$ very close to $L^1$). To this end, the original reference is
P. Sjölin. On the convergence almost everywhere of certain singular integrals and multiple Fourier series. Ark. Mat. 9 (1971).
This work was essentially redone using modern time-frequency analysis tools in
M. Pramanik, E. Terwilleger. A weak $L^2$ estimate for a maximal dyadic sum operator on $\mathbb{R}^n$. Illinois J. Math. Volume 47, Number 3 (2003), 775-813.<|endoftext|>
TITLE: Calculate miter points of stroked vectors in Cartesian plane
QUESTION [5 upvotes]: I have two vectors CA and CB which I 'stroked' with lines of width a and b. I need to calculate D and E points to draw miter joint between two stroked vectors.
What I know is:
A point coordinates
B point coordinates
C point coordinates
β angle
a length
b length
What I'm looking are coordinates of points D and E. I need to find universal formula that lets me to calculate those points at any β. (please see pic.1 below)
I can calculate those points if use stroke with same length for both vectors (a == b). I'm doing it by reflecting point C over vector AC with distance 0.5*a. Then I have right angle triangle, on which γ angle at point C equals 90° - (0.5 * β) angle. Therefore I have all three angles for the triangle and length of CF (half of a) which lets my to calculate coordinates of point D. (please see pic.2 below). I use triangle CGE to calculate E coordinates in the same way as above.
My problems start when I need to use different width for the vector's stroke (a != b) (please see pic.3 below). In that case when I draw CFD triangle I cannot calculate γ angle as it is not 90° - (0.5 * β) anymore and I have no idea how to calculate D and E coordinates. Can someone point me in the right direction how to find γ angle or if there is any other (better) way to calculate coordinates of D and E?
REPLY [5 votes]: Draw two vectors $\vec u$ and $\vec v$ as in the picture below. They form a parallelogram having $a/2$ and $b/2$ as altitudes. It follows that
$$
u={b\over 2\sin\beta},\quad v={a\over 2\sin\beta}.
$$
It is then easy to compute $\vec u$ and $\vec v$:
$$
\vec u={A-B\over AB}{b\over 2\sin\beta},\quad
\vec v={C-B\over BC}{a\over 2\sin\beta}
$$
and finally:
$$
D=B+\vec u+\vec v,\quad E=B-\vec u-\vec v.
$$
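If it helps, here is a small Python transcription of these formulas (a sketch of my own; following the answer's picture, $B$ is the common corner, $A$ and $C$ are the other endpoints, and $a$, $b$ are the stroke widths):

    import math

    def miter_points(A, B, C, a, b):
        ABx, ABy = A[0] - B[0], A[1] - B[1]
        CBx, CBy = C[0] - B[0], C[1] - B[1]
        lAB = math.hypot(ABx, ABy)
        lCB = math.hypot(CBx, CBy)
        # sine of the angle beta between the two segments at B, from the cross product
        sin_beta = abs(ABx * CBy - ABy * CBx) / (lAB * lCB)
        ux, uy = ABx / lAB * b / (2 * sin_beta), ABy / lAB * b / (2 * sin_beta)
        vx, vy = CBx / lCB * a / (2 * sin_beta), CBy / lCB * a / (2 * sin_beta)
        D = (B[0] + ux + vx, B[1] + uy + vy)
        E = (B[0] - ux - vx, B[1] - uy - vy)
        return D, E

    print(miter_points((0.0, 2.0), (0.0, 0.0), (3.0, 0.0), 1.0, 1.0))

For a symmetric right-angle corner this returns the two diagonal miter points one would expect; whether the sign conventions match your drawing setup is worth checking against pic.1.<|endoftext|>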
TITLE: What benefits do real numbers bring to the theory of rational numbers?
QUESTION [6 upvotes]: Complex numbers make it easier to find real solutions of real polynomial equations. Algebraic topology makes it easier to prove theorems of (very) elementary topology (e.g. the invariance of domain theorem).
In that sense, what are theorems purely about rational numbers whose proofs are greatly helped by the introduction of real numbers?
By "purely" I mean: not about Cauchy sequences, Dedekind cuts, etc. of rational numbers. (This is of course a meta-mathematical statement and therefore imprecise by nature.)
"No, there is no such thing, because..." would also be a valuable answer.
REPLY [2 votes]: Real numbers together with all the other completions of the rationals known as the $p$-adics are very useful indeed in finding rational solutions of quadratic forms. A general principle known as Hasse principle asserts that a quadratic form in $n$ variables has rational solutions if and only if it has solutions in each completion.<|endoftext|>
TITLE: Entries of the inverse of $\left[\frac{1}{x+i+j-1}\right]_{i,j\in\{1,2,\ldots,n\}}$ are polynomials in $x$.
QUESTION [5 upvotes]: Let $n$ be a positive integer. Define $$\textbf{A}_n(x):= \left[\frac{1}{x+i+j-1}\right]_{i,j\in\{1,2,\ldots,n\}}$$ as a matrix over the field $\mathbb{Q}(x)$ of rational functions over $\mathbb{Q}$ in variable $x$.
(a) Prove that the Hilbert matrix $\textbf{A}_n(0)$ is an invertible matrix over $\mathbb{Q}$ and all entries of the inverse of $\textbf{A}_n(0)$ are integers.
(b) Determine the greatest common divisor (over $\mathbb{Z}$) of all the entries of $\big(\textbf{A}_n(0)\big)^{-1}$.
(c) Show that $\textbf{A}_n(x)$ is an invertible matrix over $\mathbb{Q}(x)$ and every entry of the inverse of $\textbf{A}_n(x)$ is a polynomial in $x$.
(d) Prove that $x+n$ is the greatest common divisor (over $\mathbb{Q}[x]$) of all the entries of $\big(\textbf{A}_n(x)\big)^{-1}$.
Parts (a) and (c) are known. Parts (b) and (d) are open. Now, Part (d) is known (see i707107's solution below), but Part (b) remains open, although it seems like the answer is $n$.
Recall that
$$\binom{t}{r}=\frac{t(t-1)(t-2)\cdots(t-r+1)}{r!}$$
for all $t\in\mathbb{Q}(x)$ and $r=0,1,2,\ldots$. According to i707107, the $(i,j)$-entry of $\big(\textbf{A}_n(x)\big)^{-1}$ is given by
$$\alpha_{i,j}(x)=(-1)^{i+j}\,(x+n)\,\binom{x+n+i-1}{i-1}\,\binom{x+n-1}{n-j}\,\binom{x+n+j-1}{n-i}\,\binom{x+i+j-2}{j-1}\,.\tag{*}$$
This means that, for all integers $k$ such that $k\notin\{-1,-2,\ldots,-2n+1\}$, the entries of $\big(\textbf{A}_n(k)\big)^{-1}$ are integers. I now have a new conjecture, which is the primary target for the bounty award.
Conjecture: The greatest common divisor $\gamma_n(k)$ over $\mathbb{Z}$ of the entries of $\big(\textbf{A}_n(k)\big)^{-1}$, where $k$ is an integer not belonging in the set $\{-1,-2,\ldots,-2n+1\}$, is given by $$\gamma_n(k)=\mathrm{lcm}(n,n+k)\,.$$
It is clear from (*) that $n+k$ must divide $\gamma_n(k)$. However, it is not yet clear to me why $n$ should divide $\gamma_n(k)$. I would like to have a proof of this conjecture, or at least a proof that $n \mid \gamma_n(k)$.
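A brute-force sympy tabulation of my own, for small $n$ and $k\geq 0$, simply lists the gcd of the entries next to $\mathrm{lcm}(n,n+k)$ for comparison:

    # gcd of the entries of A_n(k)^{-1} alongside lcm(n, n+k), for small n and k >= 0.
    from functools import reduce
    from sympy import Rational, Matrix, gcd, lcm

    for n in range(1, 6):
        for k in range(0, 4):
            A = Matrix(n, n, lambda i, j: Rational(1, k + i + j + 1))   # A_n(k), 0-based i, j
            g = reduce(gcd, list(A.inv()))
            print(n, k, g, lcm(n, n + k))    # compare the gcd with lcm(n, n+k)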
Let $M_n$ denote the (unitary) cyclic $\mathbb{Z}[x]$-module generated by $\dfrac{1}{\big((n-1)!\big)^2}\,(x+n)$. Then, the (unitary) $\mathbb{Z}[x]$-module $N_n$ generated by the entries of $\big(\textbf{A}_n(x)\big)^{-1}$ is a $\mathbb{Z}[x]$-submodule of $M_n$.
We also denote by $\tilde{M}_n$ for the (unitary) $\mathbb{Z}$-module generated by $\dfrac{1}{\big((n-1)!\big)^2}\,(x+n)\,x^l$ for $l=0,1,2,\ldots,2n-2$. Then, the (unitary) $\mathbb{Z}$-module $\tilde{N}_n$ generated by the entries of $\big(\textbf{A}_n(x)\big)^{-1}$ is a $\mathbb{Z}$-submodule of $\tilde{M}_n$.
For example, $M_2/N_2$ is isomorphic to the (unitary) $\mathbb{Z}[x]$-module $\mathbb{Z}/2\mathbb{Z}$ (in which $x$ acts trivially), and $\tilde{M}_2/\tilde{N}_2$ is isomorphic to the (unitary) $\mathbb{Z}$-module $\mathbb{Z}/2\mathbb{Z}$. Hence, $\left|M_2/N_2\right|=2=\left|\tilde{M}_2/\tilde{N}_2\right|$. For $n=3$, Mathematica yields
$$\tilde{M}_3/\tilde{N}_3\cong (\mathbb{Z}/2\mathbb{Z})\oplus(\mathbb{Z}/3\mathbb{Z})^{\oplus 2}\oplus(\mathbb{Z}/4\mathbb{Z})^{\oplus 3}\,,$$
as abelian groups. That is, $\left|\tilde{M}_3/\tilde{N}_3\right|=1152$. On the other hand,
$$M_3/N_3\cong \mathbb{Z}[x] \big/\left(12,2x^2+6x+4,x^4-x^2\right)$$
as $\mathbb{Z}[x]$-modules, which gives $\left|M_3/N_3\right|=576$.
Question: Describe the factor $\mathbb{Z}[x]$-module $M_n/N_n$ and the factor $\mathbb{Z}$-module $\tilde{M}_n/\tilde{N}_n$. It is easily seen that $\left|M_n/N_n\right|\leq\left|\tilde{M}_n/\tilde{N}_n\right|$. What are $\left|M_n/N_n\right|$ and $\left|\tilde{M}_n/\tilde{N}_n\right|$? It can be shown also that the ratio $\dfrac{\left|\tilde{M}_n/\tilde{N}_n\right|}{\left|M_n/N_n\right|}$ is an integer, provided that $\left|\tilde{M}_n/\tilde{N}_n\right|$ is finite. Compute $\dfrac{\left|\tilde{M}_n/\tilde{N}_n\right|}{\left|M_n/N_n\right|}$ for all integers $n>0$ such that $\left|\tilde{M}_n/\tilde{N}_n\right|<\infty$. Is it always the case that $\left|\tilde{M}_n/\tilde{N}_n\right|$ is finite?
Apart from the conjecture above, this question is also eligible for the bounty award. I have not yet fully tried to deal with any case involving $n>3$. However, for $n=4$, the module $\tilde{M}_4/\tilde{N}_4$ is huge:
$$ \tilde{M}_4/\tilde{N}_4\cong (\mathbb{Z}/2\mathbb{Z})^{\oplus 2}\oplus(\mathbb{Z}/3\mathbb{Z})^{\oplus 3}\oplus(\mathbb{Z}/8\mathbb{Z})^{\oplus 2}\oplus(\mathbb{Z}/9\mathbb{Z})^{\oplus 2}\oplus(\mathbb{Z}/16\mathbb{Z})\oplus(\mathbb{Z}/27\mathbb{Z})$$
as abelian groups.
REPLY [3 votes]: For Part (b), according to i707107's answer, the $(i,j)$-entry of $\textbf{H}_n:=\big(\textbf{A}_n(0)\big)^{-1}$ is given by $$h_{i,j}:=(-1)^{i+j}\,n\,\binom{n+i-1}{i-1}\,\binom{n-1}{j-1}\,\binom{n+j-1}{i+j-1}\,\binom{i+j-2}{i-1}\,.$$
Hence, $n$ is a divisor of the greatest common divisor $g_n$ over $\mathbb{Z}$ of the entries of $\textbf{H}_n$.
Note that
$$\left|h_{1,j}\right|=n\,\binom{n-1}{j-1}\,\binom{n+j-1}{j}=n\,\binom{n+j-1}{n-1}\,\binom{n-1}{j-1}=n\,\binom{n+j-1}{j-1}\,\binom{n}{j}\,;$$
in particular,
$$h_{1,1}=n^2\,.$$
Ergo, $$n\mid g_n\mid n^2\,.$$
If $p$ is a prime divisor of $n$ such that $p^k$ is the largest power of $p$ that divides $n$, then using Lucas's Theorem, we know that
$$\binom{n+p^k-1}{p^k-1}\equiv 1\pmod{p}$$
and
$$\binom{n}{p^k}\equiv \frac{n}{p^k}\pmod{p}\,.$$
Therefore, $p$ does not divide $\dfrac{h_{1,p^k}}{n}$, whence $p\nmid \dfrac{g_n}{n}$. Hence, the greatest common divisor of the entries of $\textbf{H}_n=\big(\textbf{A}_n(0)\big)^{-1}$ must be
$$g_n=n\,.$$<|endoftext|>
TITLE: Jordan form of a power of Jordan block?
QUESTION [7 upvotes]: How, in general, does one find the Jordan form of a power of a Jordan block?
Because of the comments on this question I think there is a simple answer.
REPLY [18 votes]: Let $J$ be the $n\times n$ Jordan block with eigenvalue $\lambda$. I'll assume we're working over $\mathbb C$ (or at least in characteristic $0$).
Claim: If $\lambda\neq 0$ then the Jordan normal form of $J^m$ is an $n\times n$ Jordan block with eigenvalue $\lambda^m$. If $\lambda=0$ then the Jordan normal form of $J^m$ is $r$ blocks of size $q+1$ and $m-r$ blocks of size $q$, where $m$ divides $q$ times into $n$ with remainder $r$.
Proof:
Write $J=\lambda I +N$ where $N$ contains ones on the first off-diagonal. Note that $N^m$ is the matrix with ones on the $m$th diagonal away from the main diagonal. So $N^m\neq 0$ for $m<n$, while $N^m=0$ for $m\geq n$. Now expand $(J^m-\lambda^m I)^k$ in powers of $N$: the lowest-order term is $b_kN^k$ for some coefficient $b_k$, and every other term involves a power $N^j$ with $j>k$, which lives on a diagonal further from the main one. So the $b_kN^k$ term is the only one affecting the $k$th diagonal, which means that if $b_kN^k$ is non-zero, then the whole expression $(J^m-\lambda^m I)^k$ is non-zero.
Working out $b_k$ we find it is equal to $m^k\lambda^{(m-1)k}$. Since $\lambda\neq 0$ (and the characteristic is zero) we can see that $b_k$ is non-zero. Which means that $b_kN^k$ is non-zero iff $N^k\neq 0$, i.e. iff $k<n$. Since also $(J^m-\lambda^m I)^n=0$ (because $J^m-\lambda^m I$ is $N$ times a polynomial in $N$), the minimal polynomial of $J^m$ is $(t-\lambda^m)^n$, so its Jordan form is a single $n\times n$ block with eigenvalue $\lambda^m$.<|endoftext|>
TITLE: How did Euler prove the partial fraction expansion of the cotangent function: $\pi\cot(\pi z)=\frac1z+\sum_{k=1}^\infty(\frac1{z-k}+\frac1{z+k})$?
QUESTION [5 upvotes]: As far as we know, Euler was the first to prove $$ \pi \cot(\pi z) = \frac{1}{z} + \sum_{k=1}^\infty \left( \frac{1}{z-k} + \frac{1}{z+k} \right).$$ I've seen several modern proofs of it and they all seem to rely either on the Herglotz trick or on the residue theorem. I reckon Euler had neither at his disposal, so how did he prove it?
Added: Did Euler prove it for complex $z$ or just reals?
REPLY [4 votes]: Euler did have access to the infinite product representation of the sine function
$$\sin(\pi z)=\pi z\prod_{n=1}^{\infty}\left(1-\frac{z^2}{n^2}\right)$$
Then, we have
$$\begin{align}
\frac{d \log(\sin(\pi z))}{dz}&=\pi \cot(\pi z)\\\\
&=\frac1z+\sum_{n=1}^\infty \frac{2z}{z^2-n^2}\\\\
&=\frac1z+\sum_{n=1}^\infty \left(\frac{1}{z-n}+\frac{1}{z+n}\right)\\\\
\end{align}$$
as was to be shown!<|endoftext|>
TITLE: Differential Forms on the Riemann Sphere
QUESTION [6 upvotes]: I am struggling with the following exercise of Rick Miranda's "Algebraic Curves and Riemann Surfaces" (page 111):
Let $X$ be the Riemann Sphere with local coordinate $z$ in one chart and $w=1/z$ in another chart. Let $\omega$ be a meromorphic $1$-form on $X$. Show that if $\omega=f(z)\,dz$ in the coordinate $z$, then $f$ must be a rational function of $z$.
I have unfortunately no idea how I should begin the proof (since I am new to this topic). Can someone give me a hint?
Edit: The transition map is $T:z\rightarrow 1/z$. We have $\omega=f(z)dz$ in the coordinate $z$. Then we know that $\omega$ transforms into $\omega_2=f_2(w)\,dw$ as follows: $f_2(w)=f(T(w))T'(w)=f(1/w)(-1/w^2)$, but how does this help?
REPLY [4 votes]: Lemma. Every holomorphic function on a compact Riemann surface is constant.
Proof. Let $f:X \to Y$ be a nonconstant holomorphic mapping between (connected) Riemann surfaces, with $X$ compact. Then $f(X)$ is compact, therefore closed. But it is also open by the open mapping theorem. Therefore by connectedness $Y = f(X)$, and $Y$ is also compact. As $\mathbb{C}$ isn't compact, the claim follows. $\square$
Let's use the coordinate patches $(\mathbb{C},z)$ and $(\mathbb{C}^* \cup \{ \infty \}, 1/z )$. Since $f$ is meromorphic, it has only finitely many poles. We may assume that $\infty$ is not one of them (if it is the case, replace $f$ by $1/f$). Let $a_1,\ldots,a_n$ denote the poles of $f$. At the $i$th pole, $f$ has a principal part $$p_i(z) = \sum_{j=-k_i}^{-1} c_{ij}(z-a_i)^j$$
for some finite $\{k_i\}_{i=1}^n$.
Removing those yields the function $g = f - (p_1 + \cdots + p_n)$ which is holomorphic on all the Riemann sphere. But by the Lemma above, such a function must be constant. Therefore $f = g + (p_1 + \cdots + p_n)$ is rational.
REPLY [3 votes]: EDIT: By multiplying by an appropriate polynomial, we may assume that $\omega$ has poles (at most) at $0$ and $\infty$.
On $\Bbb C-\{0\}$ you now have holomorphic functions $f$ and $g$ (your $f_2$) with $$z^2f(z)=-g(1/z).$$ Since $f$ and $g$ have at worst poles at $0$, this equation tells us that each of their Laurent series has only finitely many nonzero terms.<|endoftext|>
TITLE: Is every relation which is transitive and symmetric also reflexive?
QUESTION [9 upvotes]: I have seen a proof that every relation which is symmetric and transitive is also reflexive.
If $A=\{1,2,3\}$ and $R=\{(1,2),(2,1),(1,1),\color{blue}{(2,2)}\}$,
then $R$ is symmetric and transitive on $A$ but not reflexive, right?
Can anyone clear up this confusion for me?
REPLY [2 votes]: If you take $A=\{1,2,3\}$ and $R=\{(1,2),(1,1),(2,1),(2,2)\}$, then $R$ is both a symmetric and a transitive relation on $A$, but it is not reflexive because $(3,3)\not\in R$.<|endoftext|>
TITLE: Evaluating $\int_0^1\int_0^1 e^{\max\{x^2,y^2\}\,}\mathrm dx\,\mathrm dy$
QUESTION [7 upvotes]: The integral again for convenience is
$$
I=\int_0^1\int_0^1 e^{\max\{x^2,y^2\}}\,\mathrm dx\,\mathrm dy
$$
My thoughts:
Ignoring for a moment that the region is a rectangle, I hoped moving to polar coordinates might help. This gives
$$
I=\int_0^1\int_0^{2\pi}re^{r^2\max\{\cos^2 t,\sin^2t\}} \, \mathrm dt \, \mathrm dr
$$
Then since $|\cos t|\geq |\sin t|$ for $t\in D_1=[-\frac{\pi}{4},\frac{\pi}{4}]\cup [\frac{3\pi}{4},\frac{5\pi}{4}]$ but not for $t\in D_2=[\frac{\pi}{4},\frac{3\pi}{4}]\cup [\frac{5\pi}{4},\frac{7\pi}{4}]$
I think we can break $I$ into
$$
I=\left[\int_0^1\int_{D_1}re^{r^2\cos^2 t}\,\mathrm dt\,\mathrm dr\right] \left[\int_0^1\int_{D_2}re^{r^2\sin^2 t}\,\mathrm dt\,\mathrm dr\right]
$$
Aside from the problem of the region not being the same, I am stuck here. Is the work above on the right track? How do I evaluate for $[0,1]\times[0,1]$? Thanks!
REPLY [7 votes]: Do not use polar coordinates: Rectangles are bad for polar coordinates.
You have a piecewise definition that's making things difficult, so use each part of the domain separately. Break the integral up into two regions, above and below the diagonal of the unit square. On the lower half,
$$\int_0^1 \int_0^x e^{\max\{x^2, y^2\}} dy \, dx = \int_0^1 \int_0^x e^{x^2} \, dy \, dx = \int_0^1 xe^{x^2} \, dx$$
The upper half is similar.
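A quick numeric cross-check of my own (not needed for the argument): both the double integral and twice the one-dimensional integral above come out to $e-1\approx 1.718$.

    # Both evaluations give e - 1.
    import math
    from scipy.integrate import dblquad, quad

    I = dblquad(lambda y, x: math.exp(max(x * x, y * y)), 0, 1, 0, 1)[0]
    half = quad(lambda x: x * math.exp(x * x), 0, 1)[0]
    print(I, 2 * half, math.e - 1)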
REPLY [4 votes]: HINT:
Note that
$$I=\int_0^1\int_0^y e^{y^2}\,dx\,dy+\int_0^1\int_y^1 e^{x^2}\,dx\,dy$$
Now, interchange the order of integration in the second integral.<|endoftext|>
TITLE: Intuitive explanation of a stochastic PDE
QUESTION [7 upvotes]: Lindgren et al 2011 connects Gaussian Markov Random Fields (which have fast calculation properties due to the Markov attribute) and Gaussian Processes (which can model many types of data). The connection rests upon the fact (from Whittle 1954) that solutions to a certain stochastic partial differential equation (SPDE) defined below have a Matérn covariance (common in Gaussian Processes).
They then show that some models with defined Markov properties (like on a lattice, but they extend it to off-lattice data) are solutions to that SPDE and so all the fast calculations (such as the precision matrix) that can be done on Markov models lead to desired covariance properties. So we can do some GP calculations very quickly with this technique.
My questions are about the SPDE itself:
$$
(\kappa^2 - \Delta)^{\alpha/2}x(\mathbf{u}) = \mathcal{W}(\mathbf{u})
$$
where $\Delta = \sum \frac{\delta^2}{\delta x_i^2}$ is the Laplacian, $\mathcal{W}$ is a white-noise process, $\alpha/2$ is an integer, and $\kappa$ is a constant that represents the inverse "range" of the covariance (something like a persistence length). What does this very weird equation represent? I hate to pull an "I don't get it" so here are some specific questions:
On the LHS we have the Laplacian operator, which is the divergence of the gradient. What does a PDE with this operator imply about the solution? E.g. "$dx/dt = a$ means that x changes with speed $a$."
On the RHS we have a stochastic white noise process $\mathcal{W}$. How is this different from putting something deterministic here? In the paper they call this "driving the SPDE with white noise" but I don't know what driving means in this context.
They mention in the paper the relationship of this equation to diffusion. It would be helpful to flesh out that connection.
They further extend this model to non-stationary fields with a slightly modified SPDE:
$$
(\kappa^2(\mathbf{u}) - \Delta)^{\alpha/2}\left\{\tau(\mathbf{u})x(\mathbf{u})\right\} = \mathcal{W}(\mathbf{u})
$$
Where functions $\kappa^2(\mathbf{u})$ and $\tau(\mathbf{u})$ vary throughout space. They show this ALSO has "local" Matérn covariance but globally could be a dense covariance with interesting global correlations. How does this relate to the intuitive picture from the simpler equation?
REPLY [9 votes]: Ultimately, this is a very broad question, so I won't even attempt to answer it completely. Stochastic PDEs are an entire area of active research. Your confusion basically boils down to "what are SPDEs?", which people spend careers answering.
Let me briefly remark on a few points:
On the LHS we have the Laplacian operator, which is the divergence of the gradient. What does a PDE with this operator imply about the solution?
If you have studied (non stochastic) PDEs you should have studied the Laplace equation, $\Delta u =0$, or with driving force $\Delta u = f$. I couldn't write up a full treatment of Laplace's equation so I won't. In most SPDEs there is a time component so there are the heat equation, $\frac{\partial u}{\partial t} =\alpha \Delta u$, and Schrödinger equation, $\frac{\partial u}{\partial t} = (i\alpha \Delta +V)u$. I can't go through a full treatment of these, either.
What you have is not exactly the Laplacian operator. You have a generalization called the fractional Laplacian operator. We define this operator by its Fourier transform. Recall: $\mathcal{F}(\Delta u )(\textbf{k})=\|\textbf{k}\|^2\mathcal{F}(u)(\textbf{k})$. So the so called "fractional Laplacian operator" has the following property: $\mathcal{F}(\Delta^{\alpha/2} u )(\textbf{k})=\|\textbf{k}\|^{\alpha}\mathcal{F}(u)(\textbf{k})$. See their definition of $(\kappa^2-\Delta)^{\alpha/2}$ here, which should not be that surprising.
On the RHS we have a stochastic white noise process W. How is this different from putting something deterministic here? In the paper they call this "driving the SPDE with white noise" but I don't know what driving means in this context.
At the heart of SPDEs is that noise term, that is the object of study. $\mathcal{W}$ is a distribution, meaning it is an object like a Dirac $\delta$ "function". It is a Gaussian distributed random field. Defining this object rigorously is not trivial. Like the Dirac $\delta$, it only makes sense when paired with nice functions (this is the integral formulation I was telling you about in the comments. This is very important). I recommend this paper by Davar Khoshnevisan, pages 1-4 (including equation 2.1) for a fairly complete definition of this process in the case of the stochastic heat equation which should give some insight. Also see this question for more information on the $\delta$ "function".
Here is a video of Fields Medalist Martin Hairer explaining to a layman what stochastic PDEs are and why we should study them (this video is absolutely painless, no real math). This should answer your questions.
They mention in the paper the relationship of this equation to diffusion. It would be helpful to flesh out that connection.
I actually did my undergrad thesis on this connection. Basically, the (non-stochastic) heat equation can be derived and solved entirely in terms of Brownian motion. Einstein did this in 1905 from a macroscopic level, and Smoluchowski did it from a microscopic level; I go over both derivations in my thesis. You can solve the heat equation in terms of an expectation of Brownian motion. This is called the Feynman–Kac formula. If you do a Wick rotation to imaginary time, you get the solution to the Schrödinger equation: it is given by Brownian motion running in imaginary time. This is known as Feynman path integration, and was the subject of his PhD dissertation.
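For the heat-equation side of this connection, here is a tiny Monte Carlo sketch of my own (names and parameters are hypothetical) of the probabilistic representation $u(t,x)=\mathbb{E}[u_0(x+B_t)]$, which solves $\partial_t u=\tfrac 12\partial_{xx}u$ with initial data $u_0$:
import numpy as np
def heat_mc(u0, x, t, n_samples=200_000, seed=0):
    # u(t, x) = E[ u0(x + B_t) ]  with  B_t ~ N(0, t)
    rng = np.random.default_rng(seed)
    bt = rng.normal(0.0, np.sqrt(t), size=n_samples)
    return np.mean(u0(x + bt))
u0 = lambda y: np.exp(-y**2)       # example initial condition
print(heat_mc(u0, x=0.3, t=0.5))   # compare with the exact Gaussian convolution of u0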
See also a post I answered on MO for this connection.
How does this relate to the intuitive picture from the simpler equation?
I'm not sure, I haven't read their paper, sorry. :)
SPDEs are a beautiful and useful field; I am glad you came across it. It is, however, quite technical. I recommend the papers of a few mathematicians:
Samy Tindel
Carl Mueller
Davar Khoshnevisan
I want to remark on one last thing. The field of SPDEs was recently changed by Martin Hairer's work on rough path theory and regularity structures, for which he received the Fields Medal in 2014. I asked a question about this when I first joined math.SE, which you can see here. I have since learned a great deal about rough path theory. You can see Martin Hairer's papers here, but be forewarned that they are extremely advanced.
I'm not sure if I answered any of your questions satisfactorily, but I hope I at least gave you some material to look for the answers. I am glad you are interested in this field!
If this is too much information, just watch the video from Martin Hairer. It is painless.<|endoftext|>
TITLE: What does the symbol := mean in mathematics?
QUESTION [12 upvotes]: What does the symbol := mean in mathematics?
for example, C(continuum) := |R|
REPLY [5 votes]: It means it is a definition in most contexts.
$f(x):=x^2-\sin 2x + \pi$
means I define $f(x)$ by the expression on the other side.
However, in most contexts it is a superfluous notation, as you can work around it.<|endoftext|>
TITLE: Function that is both midpoint convex and concave: $f\left(\frac{x+y}{2}\right) = \frac{f(x)+f(y)}{2}$
QUESTION [7 upvotes]: Which functions $f:\mathbb{R} \to \mathbb{R}$ do satisfy Jensen's functional equation
$$f\left(\frac{x+y}{2}\right) = \frac{f(x)+f(y)}{2}$$
for all $x,y \in \mathbb{R}$?
I think the only ones are of type $f(x) = c$ for some constant $c\in \mathbb{R}$ and the solutions of the Cauchy functional equation $f(x+y) = f(x)+f(y)$ and the sums and constant multiples of these functions. Are there other functions which are both midpoint convex and concave?
REPLY [17 votes]: Without loss, translate so that $f(0) = 0$. Then we have
$$f(x) = f\left(\frac{2x + 0}{2}\right) = \frac{f(2x)}{2}$$
so that $f(2x) = 2 f(x)$.
Now suppose that $f$ is midpoint convex and concave. We show it satisfies the Cauchy equation:
$$f(x + y) = f\left(\frac{2x + 2y}{2}\right) = \frac{f(2x) + f(2y)}{2} = f(x) + f(y)$$
as claimed. Now just remember that multiples of solutions to the Cauchy equation are still solutions to the Cauchy equation - hence, functions with your property are exactly translates of functions which solve the Cauchy equation.<|endoftext|>
TITLE: Complex representation of a quaternionic matrix
QUESTION [5 upvotes]: It is evident that the right module $\mathbb{H}^n$ is $\mathbb{C}$-linearly isomorphic to $\mathbb{C^{2n}}$ with corresponding isomorphism $\nu : \mathbb{C^{2n}} \to\mathbb{H}^n $ given by $ \nu(a,b) = a + b\mathrm{j}$. This naturally gives a representation of any quaternionic matrix $M \in \mathcal{M}^{n \times m}(\mathbb{H}) $ by two complex matrices $A,B \in \mathcal{M}^{n \times m}(\mathbb{C})$ as $M = A + B\mathrm{j}$.
It's claimed that, in parallel with the complex representation of quaternions, the complex matrix representing $\nu^{-1}M\nu$ can be written in the form
$$
\theta_{n,m}(M) = \theta_{n,m}(A+B\mathrm{j}) =
\left[\begin{matrix} A & B \\ -\overline B & \overline{A} \end{matrix}\right]$$
where $\overline{A}$ is the complex conjugate of $A$. However, what I don't understand is where this conjugation comes from, and I need your help.
When I write
$$ \nu^{-1}M\nu(a,b) = \nu^{-1}(A +B\mathrm{j})(a + b\mathrm j) =\nu^{-1}\left(Aa + Ab\mathrm{j} + B\overline{a}\mathrm{j} -B\overline b\right) = \left( Aa - B\overline{b}, Ab + B\overline a \right) $$
I don't have any idea what to do with the conjugates to show that this map is even linear.
REPLY [2 votes]: Firstly, I want to thank Jyrki Lahtonen, Arctic Tern and mathreadler for elevating my understanding.
The key reason for my confusion was the use of left matrix-on-vector multiplication in the right module $\mathbb{H}^n$. As it turns out, in right modules it is only safe to use the corresponding matrix ring acting on the module from the right. Hence, if we assume that the map $\nu$ also acts from the right, $(a,b)\nu = a + b\mathrm{j}$, we can get the desired result
$$ (a,b)\nu M \nu^{-1} = (a +b\mathrm{j})(A +B\mathrm{j})\nu^{-1}
= ( aA + aB\mathrm{j} +b\overline{A}\mathrm{j} -b\overline{B} )\nu^{-1} = (aA - b\overline{B},aB + b\overline{A}) $$
which will yield the correct matrix representation if we assume that $aA \triangleq A^\top a^{(\top)}$.
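As an independent numerical sanity check (my own addition, not part of the argument above), one can verify that $\theta$ respects quaternionic matrix multiplication, $\theta(MN)=\theta(M)\theta(N)$, using the convention $(A+B\mathrm{j})(C+D\mathrm{j})=(AC-B\overline D)+(AD+B\overline C)\mathrm{j}$ and storing $M=A+B\mathrm{j}$ as the pair $(A,B)$:
import numpy as np
rng = np.random.default_rng(0)
def rand_q(n):
    # a random quaternionic n x n matrix, stored as the pair (A, B) with M = A + B j
    return (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)),
            rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
def qmul(M, N):
    A, B = M
    C, D = N
    return (A @ C - B @ np.conj(D), A @ D + B @ np.conj(C))   # uses j c = conj(c) j
def theta(M):
    A, B = M
    return np.block([[A, B], [-np.conj(B), np.conj(A)]])
M, N = rand_q(3), rand_q(3)
print(np.allclose(theta(qmul(M, N)), theta(M) @ theta(N)))    # True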
Otherwise we can just define $\mathbb{H}^n$ as a left module and get a similar result.
So we can actually think about $\theta_{n,m}$ as a change of global charts if we think about the quaternionic matrix $M$ as a (nonlinear) function $\mathbb{H}^n \to \mathbb{H}^m $.<|endoftext|>
TITLE: A few questions on the Gaussian integers
QUESTION [11 upvotes]: I have a few questions surrounding the Gaussian integers, which I hope can be answered together in one fell swoop.
The Gaussian integers are defined as $\mathbb{Z}[i] = \{x + iy : x, y \in \mathbb{Z}\}$. What is the intuition for working with them, and why should we care about them?
What is arithmetic like in $\mathbb{Z}[i]$?
Are there "prime numbers" in $\mathbb{Z}[i]$?
Do Gaussian integers factor into primes? If so, do they factor uniquely?
REPLY [3 votes]: At the time I posted my comment, I couldn't give a fleshed out answer. In the interest of brevity, I assumed a lot of prior knowledge, some of which I learned long ago, some of which I learned just in the past year or so. At the other extreme, an entire book could be written to answer your $4$-part question. I will now give an answer that is longer than the previous ones, but still well short of a full length book.
It is important to understand how the familiar integers fit into the larger domain of algebraic integers. The positive integers $1, 2, 3, \ldots$, the negative integers $-1, -2, -3, \ldots$ and $0$ make up the set of familiar integers which we call $\mathbb{Z}$ for short. If $a, b \in \mathbb{Z}$, $b \neq 0$ and $$r = \frac{a}{b},$$ then $r$ is a rational number. We call the set of all such rational numbers $\mathbb{Q}$ for short.
If $x$ is a positive integer, then it is a solution to the equation $x - N = 0$, where $N$ is also a positive integer. And if $x$ is a negative integer, then it is a solution to the equation $x + N = 0$, where $N$ is a positive integer. These are perfectly obvious and boring facts, but it's necessary to go over them in order to explain that the positive and negative integers are algebraic integers of degree $1$.
Now consider the number $x = 2 + i$. This is clearly not one of our familiar integers of $\mathbb{Z}$, but it is an algebraic integer, since it is a solution to the equation $x^2 - 4x + 5 = 0$. It is an algebraic integer of degree $2$ because the $x^2$ part of it has an implied coefficient of $1$, and $4$ and $5$ are both integers.
In fact, if $a, b \in \mathbb{Z}$, then $x = a + bi$ is an algebraic integer, as it satisfies the equation $x^2 - 2ax + (a^2 + b^2) = 0$.
This is not necessarily the case if $a$ and $b$ are both rational but one or both of them are not integers. For example, if $$x = \frac{1}{2} + \frac{i}{2},$$ we don't have an algebraic integer, since the relevant equation is $2x^2 - 2x + 1 = 0$, in which the leading coefficient (the one attached to $x^2$) is not $1$, but $2$. This example would work out differently if we had $\sqrt{-3}$ or $\sqrt{-7}$ or $\sqrt{-11}$, etc., instead of $i$.
Hopefully this is all the necessary background information to explain my first point, that if $a, b \in \mathbb{Z}$, then $a + bi$ is an algebraic integer, but if either $a$ or $b$ or both is any real number other than an integer, then $a + bi$ is not an algebraic integer. If you care about the set of all algebraic integers of the form $a + b \theta$ where $\theta$ is some algebraic number like $i$, then you care about the Gaussian integers.
Arithmetic in $\mathbb{Z}[i]$ is not that different than the algebraic arithmetic you were taught before you even knew about imaginary numbers. To add up two Gaussian integers $a + bi$ and $c + di$, you just have to line up the real parts, add them up, line up the imaginary parts, add them up, and there you have it. Thus $(a + bi) + (c + di) = (a + c) + (b + d)i$.
For multiplication you just need to remember the mnemonic FOIL (First, Outer, Inner, Last). Thus $$(a + bi)(c + di) = ac + adi + bci + bdi^2.$$ But since, as you already know, $i^2 = -1$, $bdi^2 = -bd$ and therefore $$(a + bi)(c + di) = (ac - bd) + (ad + bc)i.$$
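As a small illustration of the arithmetic just described (a sketch of my own, representing $a+bi$ as the pair $(a,b)$):
def gi_add(p, q):
    return (p[0] + q[0], p[1] + q[1])
def gi_mul(p, q):
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)    # FOIL together with i^2 = -1
print(gi_add((2, 1), (1, -3)))   # (3, -2), i.e. 3 - 2i
print(gi_mul((2, 1), (2, -1)))   # (5, 0),  i.e. (2 + i)(2 - i) = 5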
To answer the third and fourth points, let's backtrack to $\mathbb{Z}$. Factorize the integer $-10$. Valid answers include $(-1) \times 2 \times 5$ and $-2 \times 5$ and $2 \times -5$. These are not distinct because the only distinction is multiplication by units ($-1$ is one of the units of $\mathbb{Z}$, and $1$ is the other one). Nor do we care about order: $-5 \times 2$ is just a different ordering of $2 \times -5$. If $p$ is a positive prime, then it is divisible only by $-p$, the two units, and itself.
In $\mathbb{Z}[i]$, the units, "clockwise," are $i, 1, -i, -1$. The way that you know whether an algebraic integer of degree $2$ is a unit is if in the equation $x^2 - 2Tx + N = 0$ we have $|N| = 1$. $N$ will depend on $\theta$, but since here $\theta = i$, we have $N = a^2 + b^2$.
A number like $2 + i$ is divisible only by the units of $\mathbb{Z}[i]$, and by itself multiplied by the units, but by no other numbers of $\mathbb{Z}[i]$. This means that $2 + i$ is prime (or at least irreducible, though in this domain the distinction is not as relevant as in, say, $\mathbb{Z}[\sqrt{-5}]$). So there are primes in $\mathbb{Z}[i]$.
Do they factor uniquely? Yes, they do. I might come back later and elaborate further on this point, giving a proof, or at least a link to a proof. For now, before I have to log off, I will just remind you to be aware of units. Hence $$5 = (2 - i)(2 + i) = (1 - 2i)(1 + 2i)$$ does not present two distinct factorizations. For instance, calculate $(2 - i) \times i$.<|endoftext|>
TITLE: Coloring the pentagonal hexecontahedron
QUESTION [7 upvotes]: So, I'd like to color the pentagonal hexecontahedron in a way that is satisfying aesthetically and mathematically. For me this equates to, in order of priority -
1. No same-colored faces can share an edge
2. We must use as few colors as possible
3. We must have as much symmetry as possible
Here's a 5-colored pentagonal hexecontahedron - 5 colors is a lot, but it's nice that there are apparently no adjacent faces with the same color. Not clearly very symmetrical though (if it is, it's obscured by the large number of colors).
By the four-color theorem, we know it can be done with 4. I know it can't be done with 3, proof below. So, the aesthetics question boils down to "how symmetrical a 4-coloring can we make?".
By the way, it is possible to color the deltoid hexecontahedron with only three colors, without same colored faces touching, and with chiral octahedral symmetry. Here's a lovely piece of origami designed by Tomoko Fuse that does this.
Answer this question and you will be credited in a documentary about virus structure! I'm coloring a Satellite Tobacco Mosaic Virus.
EDIT: so the origami one I show is actually quite symmetrical! But it's still not hugely usable, since you (I) have to stare at it for a while to see it.
REPLY [3 votes]: Here's one 4-colouring:
Front:
Back:
For convenience, here is a list of vertices with xyz coordinates, and
here is a list giving
the vertex numbers of each face and the colour of the face.<|endoftext|>
TITLE: Differential Geometry for General Relativity
QUESTION [8 upvotes]: I'm going to start self-studying General Relativity from Sean Caroll's Spacetime and Geometry: An Introduction to General Relativity. I'd like to have a textbook on Differential Geometry/Calculus on Manifolds for me on the side.
I do like mathematical rigor, and I'd like a textbook whose focus caters to my need. Having said that, I don't want an exhaustive mathematics textbook (although I'd appreciate one) that'll hinder me from going back to the physics in a timely manner.
I looked for example at Lee's textbook, but it seemed too advanced. I have done courses on Single and Multivariable Calculus, Linear Algebra, Analysis I and II and Topology, but I'm not sure what book would be the most useful for me given that I have a knack for seeing all results formally.
P.S: I'm a student of physics with a mathematical leaning.
REPLY [8 votes]: I wanted to recommend Lee, but since you said it's too advanced... Well, to be fair, while his book is quite extensive, it is a very pedagogically written one too, so if you wish to study manifolds, at one point at least, you should read it.
I am not sure that's what you are looking for, but there are some GR books that discuss differential geometry in bit more detail and rigour than Carroll's book, these would be for example
Wald: General Relativity
Straumann: General Relativity With Applications To Astrophysics
Hawking & Ellis: The Large-Scale Structure Of Spacetime
The last third of Straumann's book is essentially differential geometry, and he is quite rigorous.
For pure math books you could try
Spivak: A Comprehensive Introduction To Differential Geometry
This is essentially a 5-volume grimoire; however, it builds everything up quite slowly and pedagogically, and makes an attempt to build a bridge between the old formalism (indices, coordinates, etc.) and the modern one.
Isham: Modern Differential Geometry For Physicists
This one does not actually treat Riemannian geometry as far as I recall, but was written specifically for physics people, and also it has a nice account of principal bundles.
Boothby: An Introduction To Differentiable Manifolds And Riemannian Geometry
About as advanced as Lee, I believe. Also this book does treat Riemannian geometry, as you can infer from the title.
Warner: Foundations Of Differentiable Manifolds and Lie Groups
Kobayashi & Nomizu: Foundations Of Differential Geometry
This is a very advanced book that is quite hard to read, so I'd suggest visiting this later. However, it is also quite essential. Despite the fact that this (two-volume) book is quite old, it is still the standard reference in the field. The contents of volume 1 are probably what would interest you most, as most of the Riemannian geometry is treated there.<|endoftext|>
TITLE: Proving Wilson's theorem
QUESTION [5 upvotes]: Wilson's theorem: if $p$ is prime then $(p-1)! \equiv -1 \pmod p$
Approach:
$$(p-1)! = 1\cdot 2\cdot 3\cdots (p-1)$$
My teacher said in class that the gcd of every integer less than $p$ and $p$ is $1$, so every integer has a multiplicative inverse $\pmod p$. He also said that the multiplicative inverse of each integer less than $p$ lies in the same set of integers less than $p$ (this idea seems to be right, but does it have to be proven?), and that $1$ and $p-1$ are their own inverses (drawing different mod grids, it looks like it's right, but again how is that true?). He concluded the following:
$$1\cdot(p-1)\cdot\left(a_1a_1^{-1}\cdots a_{(p-3)/2}\,a_{(p-3)/2}^{-1}\right) \equiv -1 \pmod p$$
So he is grouping all the elements with distinct multiplicative inverses. This makes sense because there are $p-3$ elements with distinct multiplicative inverses and $p-3$ is even, so we can group them in pairs. How do we know that one multiplicative inverse corresponds to just one number, so that we can group them in such an easy way?
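A quick numerical illustration of both observations — every residue has a unique inverse in the same set, and only $1$ and $p-1$ are their own inverses (a sketch of my own; `pow(a, -1, p)` requires Python 3.8+):
p = 11
inv = {a: pow(a, -1, p) for a in range(1, p)}   # modular inverse of each residue
print(inv)                                      # a bijection from {1,...,10} to itself
print([a for a in inv if inv[a] == a])          # [1, 10]: only 1 and p-1 are self-inverse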
REPLY [2 votes]: One has indeed an equivalence $$p\text{ is prime }\iff(p-1)!\equiv -1\pmod p$$
(1)$\space p\text{ is prime }\Rightarrow(p-1)!\equiv -1\pmod p$
By Fermat's little theorem, each $a\in\{1,2,\ldots,p-2\}$ is invertible modulo $p$ (indeed $a\cdot a^{p-2}\equiv 1$), and since only $1$ and $p-1$ are their own inverses modulo $p$, the factors of $(p-2)!$ pair off into products $a\,a^{-1}\equiv 1$. Hence $(p-2)!\equiv 1 \pmod p\Rightarrow (p-1)!\equiv (p-1)\cdot 1\equiv -1\pmod p$.
(2)$\space (p-1)!\equiv -1\pmod p\Rightarrow p\text{ is prime }$.
Suppose $p$ is composite; then it has a (positive) divisor $d$ with $1<d<p$, so $d$ lies in $\{2,3,\ldots,(p-2),(p-1)\}$, which implies that $\gcd((p-1)!,p)\gt 1$. Now if $\space (p-1)!\equiv -1\pmod p$, then dividing the equality
$(p-1)!=-1+pm$ (equivalent to the congruence) by such a divisor $d$ of both $p$ and $(p-1)!$, one finds that $d$ must divide $-1$, which is absurd. Thus $p$ is not composite.<|endoftext|>
TITLE: What is $\sum_{i=1}^{n}\frac{F_i}{i}$?
QUESTION [5 upvotes]: Mathematica is able to evaluate the summation $\sum_{i=1}^{n}\frac{F_i}{i}$ in terms of the Lerch transcendent. It is natural to consider whether or not this summation can be expressed in a more simple way.
It is conceivable that $a_{n} = \sum_{i=1}^{n}\frac{F_i}{i}$ may be expressed in a more compact way, perhaps in terms of Fibonacci and Lucas numbers somehow, using the nearest integer function or the floor function.
Integer sequences such as $(\lfloor a_{n} \rfloor)_{n \in \mathbb{N}}$, $([a_{n}])_{n \in \mathbb{N}}$, $( n! a_{n} )_{n \in \mathbb{N}}$, and $( \text{den}(a_{n}) )_{n \in \mathbb{N}}$ are not currently in the On-Line Encyclopedia of Integer Sequences.
It appears that $\frac{\ln a_{n}}{n}$ converges to a constant, approximately $0.481$. What is this constant? Can this constant be used to somehow express the sequence $(a_{n})_{n \in \mathbb{N}}$ in closed form?
REPLY [4 votes]: EXPANDED:
Let's define :
$$\tag{1}a_n(x):=\sum_{k=1}^{n}\frac{F_k}{k}x^k$$
and use $\,F_k=\dfrac {\varphi^{\,k}-\psi^{\,k}}{\sqrt{5}}\;$ with $\;\varphi=\dfrac{1+\sqrt{5}}2,\ \psi=\dfrac{1-\sqrt{5}}2=-\dfrac 1{\varphi}\;$ (as suggested by Aravind and Wikipedia) then :
\begin{align}
a_n(x)&=\frac 1{\sqrt{5}}\sum_{k=1}^{n}\left(\varphi^{\,k}-\psi^{\,k}\right)\frac{x^k}k\tag{2}\\
&=\frac 1{\sqrt{5}} \int\sum_{k=1}^{n}\left(\varphi^{\,k}-\psi^{\,k}\right) \frac {x^{k}}x\,dx\\
&=\frac 1{\sqrt{5}}\left(\int \frac{(\varphi\,x)^{\,n}-1}{\varphi\,x-1}\varphi\,dx-\int\frac{(\psi\,x)^{\,n}-1}{\psi\,x-1}\psi\,dx\right)\\
&=\frac 1{\sqrt{5}}\int_{\psi\,x}^{\varphi\,x} \frac{u^{\,n}-1}{u-1}\,du=\frac 1{\sqrt{5}}\int_{\psi\,x}^{\varphi\,x}\ \sum_{k=0}^{n-1}\;u^{\,k}\,du\tag{3}\\
&\quad\text{writing the integrand at the left as a sum of two terms gives :}\\\\
a_n:=a_n(1)&=\frac 1{\sqrt{5}}\left(-B_{\varphi}(n+1,0)+B_{\psi}(n+1,0)-\log(1-\varphi)+\log(1-\psi)\right)\tag{4}\\
\end{align}
with $B_x(a,b)$ the incomplete beta function (the singularities in the integrals are removable and will not be considered).
All this is probably not very useful for practical evaluations but $(2)$ applied to $x=1$ allows us to find a closed form for your limit $\;\displaystyle l:=\lim_{n \to \infty} \frac{\ln a_{n}}{n}$ using $\varphi>1$ and $|\psi|<1$ :
\begin{align}
a_n&=\frac 1{\sqrt{5}}\sum_{k=1}^{n}\frac{1}k\left(\varphi^{\,k}-\psi^{\,k}\right)\\
a_n&\sim \frac 1{\sqrt{5}}\sum_{k=1}^{n}\frac{\varphi^{\,k}}k,\quad n\to\infty\\
&\sim \frac 1{\sqrt{5}}\frac{\varphi^{\,n}}{n}\sum_{k=1}^{n}\frac{\varphi^{\,k-n}\;n}k,\quad n\to\infty\\
&\quad\text{setting $\;j:=n-k\,$ gives}\\
&\sim \frac 1{\sqrt{5}}\frac{\varphi^{\,n}}{n}\sum_{j=0}^{n-1}\frac 1{\varphi^{\,j}\;\frac {n-j}n},\quad n\to\infty\\
&\sim \frac 1{\sqrt{5}}\frac{\varphi^{\,n}}{n}\sum_{j=0}^{n-1}\frac {1+\sum_{m>0} \left(\frac jn\right)^m}{\varphi^{\,j}},\quad n\to\infty\\
&\sim \frac 1{\sqrt{5}}\frac{\varphi^{\,n}}{n}\left(\frac 1{1-\frac 1{\varphi}}+\sum_{m>0}\frac 1{n^m}\sum_{j=0}^{n-1}\frac {j^m}{\varphi^{\,j}}\right) ,\quad n\to\infty\\
&\sim \frac 1{\sqrt{5}}\frac{\varphi^{\,n}}{n}\left(\frac {\varphi}{\varphi-1}+\sum_{m>0}\frac {\Phi\left(\frac 1{\varphi},-m,0\right)}{n^m}\right) ,\quad n\to\infty\tag{5}\\
a_n&\sim \frac{\varphi^{\,n+1}}{\sqrt{5}\;n\,(\varphi-1)},\quad n\to\infty\tag{6}\\
\end{align}
With $\Phi$ the Lerch transcendent $\;\displaystyle\Phi(z,s,a):=\sum_{n\ge 0}\frac{z^n}{(n+a)^s}$.
The limit will simply be :
$$l=\lim_{n \to \infty} \frac{\ln a_{n}}{n}=\log(\varphi)=\log\dfrac{1+\sqrt{5}}2\approx 0.4812118251$$
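A rough numerical check of this limit (my own sketch; convergence is slow, with an error of order $\log(n)/n$, so even at $n=1000$ the printed value is only about $0.47$):
from math import log, sqrt
phi = (1 + sqrt(5)) / 2
n = 1000
a, f_prev, f = 0.0, 0, 1                 # f runs through F_1, F_2, ... as exact integers
for k in range(1, n + 1):
    a += f / k
    f_prev, f = f, f_prev + f
print(log(a) / n, log(phi))              # ~0.474...  versus  log(phi) ~ 0.4812...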
$$-$$
Let's use $(5)$ to get a more precise expansion defining $\;\displaystyle f_m(z):=\sum_{n\ge 0}n^m\,z^n=\Phi(z,-m,0)$.
For $m$ a nonnegative integer we have $\;\displaystyle f_{m+1}(z)=z\,f_{m}(z)'=z\sum_{n\ge 0}n^m\,n\,z^{n-1}=\sum_{n\ge 0}n^{m+1}\,z^{n}\ $
giving us the closed forms :
\begin{align}
f_0(z)&=\frac 1{(1-z)}\\
f_1(z)&=\frac z{(1-z)^2}\\
f_2(z)&=\frac {z^2+z}{(1-z)^3}\\
f_3(z)&=\frac {z^3+4z^2+z}{(1-z)^4}\\
\cdots
\end{align}
Using $\;z:=\dfrac 1{\varphi}\;$ so that $\,\dfrac 1{1-z}=\dfrac {\varphi}{\varphi-1}\;$ we may rewrite $(5)$ as :
\begin{align}
a_n&\sim \frac 1{\sqrt{5}}\frac{\varphi^{\,n}}{n}\left(\frac {\varphi}{\varphi-1}+\frac {\varphi}{(\varphi-1)^2\,n}+\frac {\varphi+\varphi^2}{(\varphi-1)^3\,n^2}+\frac {\varphi+4\varphi^2+\varphi^3}{(\varphi-1)^4\,n^3}+\cdots\right) \\
a_n&\sim \frac 1{\sqrt{5}}\frac{\varphi^{\,n}}{n}\left(\varphi+1+\frac {2\varphi+1}{n}+\frac {8\varphi+5}{n^2}+\frac {50\varphi+31}{n^3}+\frac{416\varphi+257}{n^4}+\cdots\right) \\
\end{align}
The sequences of integers appearing there are known : OEIS A000557 and A000556 allowing us to write this exponential generating function :
\begin{align}
\frac{\varphi+e^{-x}}{1-2\sinh(x)}&=(\varphi+1)+(2\varphi+1)\frac x{1!}+(8\varphi+5)\frac {x^2}{2!}+(50\varphi+31)\frac {x^3}{3!}+(416\varphi+257)\frac {x^4}{4!}+\cdots\\
\end{align}
The OEIS links should help you to find further relations and interesting links.<|endoftext|>
TITLE: What is the rank of the matrix consisting of all permutations of one vector?
QUESTION [12 upvotes]: Let $a=(a_1,...,a_n)^\top\in\mathbb{R}^n$ be a column vector and let $M_1,...,M_{n!}$ denote all $n\times n$ permutation matrices. When is the rank of the matrix that consists of all possible permutations of $a$:
$$ A=[M_1 a \,|\; ... \; |\, M_{n!} a]\in\mathbb{R}^{n\times n!} $$
equal to $n$?
Obviously, $rank(A)\le n$ and if all entries of $a$ are identical, then $rank(A)=1$. Moreover, if $A$ has rank $n$, then there exist two entries $i,j$ s.t. $a_i\not=a_j$. Is the converse statement also true?
REPLY [21 votes]: The rank takes the following $4$ possible values in the following situations (a quick numerical check is sketched after the list):
Rank $0$: $a = 0$.
Rank $1$: the $a_i$ are all equal to some nonzero scalar.
Rank $n-1$: the $a_i$ sum to $0$, and at least one is not zero.
Rank $n$: otherwise.
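A quick numerical check of these four cases for small $n$ (a sketch of my own; the helper name is hypothetical):
import numpy as np
from itertools import permutations
def perm_rank(a):
    a = np.asarray(a, dtype=float)
    # the columns of A are all n! permutations of the vector a
    A = np.array([a[list(p)] for p in permutations(range(len(a)))]).T
    return np.linalg.matrix_rank(A)
print(perm_rank([0, 0, 0, 0]))    # 0:  a = 0
print(perm_rank([2, 2, 2, 2]))    # 1:  all entries equal and nonzero
print(perm_rank([1, -1, 3, -3]))  # 3:  entries sum to 0, not all zero  (n - 1)
print(perm_rank([1, 2, 3, 4]))    # 4:  otherwise                       (n)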
This is not hard to prove directly but can be fit into the general context of representation theory. $\mathbb{R}^n$ is a representation of the symmetric group, and it decomposes as a direct sum of two irreducible representations, namely the trivial representation (spanned by the all-ones vector) and an irreducible representation of dimension $n-1$ (vectors summing to zero). If $v \in \mathbb{R}^n$ is a vector, then
$$\text{span}(gv : g \in S_n)$$
is an invariant subspace of $\mathbb{R}^n$, and so must be a sum of irreducible subrepresentations. Moreover, in this case (although not in general) every such sum occurs. This gives the above.<|endoftext|>
TITLE: Relating the normal bundle and trivial bundles of $S^n$ to the tautological and trivial line bundles of $\mathbb{R}P^n$
QUESTION [5 upvotes]: On page $10$ of Hatcher's Vector Bundles and K Theory, he gives a proof that the Whitney sum of the trivial line bundle over $\mathbb{R}P^n$ and the tangent bundle is equal to the Whitney sum of copies of the tautological line bundle.
A summary of the proof is
1) The image of the tangent bundle of the sphere under the quotient is the tangent bundle of real projective space, and the image of the normal bundle is the trivial line bundle over real projective space.
2) He says the sum of the tangent bundle and normal bundle of the sphere is trivial.
3) He then proves that the trivial line bundle over the sphere is isomorphic to the normal bundle and proves that the normal bundle under the quotient is mapped to the tautological line bundle over real projective space.
But this doesn't make any sense because he claimed earlier that the image of the normal bundle is trivial and the tautological bundle is NOT trivial. Could someone please clarify what has happened? Is the image of the normal bundle of the sphere under the quotient the tautological bundle or the trivial bundle?
REPLY [4 votes]: Your point $1$ is correct. If you meant the direct sum of the tangent bundle and the normal bundle of the sphere is trivial, then point $2$ is also correct. However, point $3$ is wrong.
Hatcher shows that under the quotient map, the trivial bundle $S^n\times\mathbb{R}^{n+1}$ on $S^n$ is sent to $E^{\oplus(n+1)}$ on $\mathbb{RP}^n$ where $E$ is the line bundle on $\mathbb{RP}^n$ obtained by making the identification $(x, t) \sim (-x, -t)$ in the trivial bundle $S^n\times\mathbb{R}$. As has already been shown, there is an isomorphism $S^n\times\mathbb{R}\cong NS^n$, which is given by $(x, t) \mapsto (x, tx)$. Under this isomorphism, the identification $(x, t) \sim (-x, -t)$ becomes $(x, tx) \sim (-x, tx)$. This is not the identification used when showing the trivial bundle is sent to the trivial bundle; that identification is $(x, tx) \sim (-x, -tx)$.
You may not know why yet, but every line bundle on $S^n$ is trivial. This is not true on $\mathbb{RP}^n$ though; there is the trivial bundle $\varepsilon^1$ and the tautological bundle $\gamma^1$ which is non-trivial. If $f : S^n \to \mathbb{RP}^n$ is the quotient map, $f^*\varepsilon^1$ and $f^*\gamma^1$ are line bundles on $S^n$ and are therefore trivial. That is, both $\varepsilon^1$ and $\gamma^1$ on $\mathbb{RP}^n$ arise from the trivial bundle on $S^n$, even though $\varepsilon^1$ and $\gamma^1$ are not isomorphic. That's what's going on here.<|endoftext|>
TITLE: Why is the smallest Pythagorean triple $(x,y,z)=(3,4,5)$ not close (in ratio $x/y$) to any other small triple?
QUESTION [5 upvotes]: The table below lists the primitive Pythagorean triples $x^2+y^2=z^2$ with $z<100$ in ascending order of the ratio $x/y$. The final column shows the difference between each ratio and the preceding ratio in the list.
It can be seen that the differences in ratio (highlighted in red) before and after the smallest triple (3,4,5) are much larger than any other in the list. The differences (in green) before and after the next smallest triple (5,12,13) are also relatively large.
Question: Why are there no other small primitive Pythagorean triples close (in terms of the ratio) to (3,4,5)? Or is this just coincidence?
Given the general formula for Pythagorean triples $(m^2-n^2,2mn,m^2+n^2)$, the question seems to amount to showing that the ratio:
$$R=\frac{m^2-n^2}{2mn}$$
cannot be close to either $3/4$ or $4/3$ unless $m^2+n^2$ is fairly large. But I can't see how to proceed, other than by a case-by-case examination which would be equivalent to listing triples.
REPLY [4 votes]: Note that $\frac{3}{4}=\frac{r^2-1}{2r}$ has a solution $r=2$. Let now $r:=2+d$ for some rational number $d$, so that $$\delta:=\frac{r^2-1}{2r}-\frac{3}{4}=\frac{3+4d+d^2}{4+2d}-\frac{3}{4}=\frac{d(5+2d)}{4(2+d)}\,.$$
If you want $|\delta|\leq\frac{1}{20}$, then $$-0.07935\frac{q(q+4p)}{2}+2q^2> 2q^2 \geq 2\cdot 13^2=338>100\,,$$
as $|d|<\frac{1}{4}$ (making $q+4p>0$). The smallest $z$ that satisfies $|\delta|<\frac{1}{20}$ is $z=397$, i.e., for $(x,y,z)=(228,325,397)$, where $$\frac{3}{4}-\frac{x}{y}\approx 0.75- 0.701538 \lesssim 0.048462<\frac{1}{20}\,.$$
In fact, for any rational number $u$ and $\epsilon>0$, there exists $r\in\mathbb{Q}$ such that $$0<\left|\frac{r^2-1}{2r}-u\right|<\epsilon\,.$$
Take $u:=\dfrac{3}{4}$ and $\epsilon=\dfrac{1}{50000}$, then $r=\dfrac{100001}{50000}$ is a solution. Then, let $x,y\in\mathbb{N}$ be such that $\dfrac{x}{y}=\dfrac{r^2-1}{2r}$, and you will get a Pythagorean triplet $$(x,y,z)=\left(7500200001,10000100000,12500200001\right)\,,$$ with $$\frac{x}{y}=\frac{7500200001}{10000100000}\approx 0.750012$$ so that $$0<\left|\frac{x}{y}-u\right|<\epsilon\,.$$<|endoftext|>
TITLE: Confused about Domain of Unbounded Operators (Hilbert Spaces)
QUESTION [5 upvotes]: I understand that a bounded (linear) operator $A$ on a Hilbert space $H$ satisfies a condition $||Av|| \le c ||v|| $ for some fixed real number $c$ and for all $v \in H$. So, the domain of a bounded operator is all of $H$.
Presumably an unbounded operator fails to satisfy this requirement in some way. On the other hand, some references seem to take "unbounded" as "not necessarily bounded"; for example, in nLab, https://ncatlab.org/nlab/show/unbounded+operator, "In particular every bounded operator $A: \mathcal{H} \to \mathcal{H}$ is an unbounded operator". So is an unbounded operator just an operator with a specified domain?
From what I've read, an unbounded operator has a domain $D(A) \subset H$, often taken to be dense in $H$. It isn't clear to me whether or not an unbounded operator is bounded on its domain. I suspect not, since the closed graph theorem says a closed operator defined on the whole space is bounded, which suggests the existence of non-closed operators defined on the whole space which are not bounded.
REPLY [3 votes]: It seems that both conventions are used. In my world, the class of unbounded operators includes the class of bounded operators. I think that is a very common convention, although it would perhaps be more precise to refer to such operators as 'partially defined operators'. I was quite surprised when I read the tag description on this site, since I had never before seen the class of unbounded operators defined in a way such as to rule out bounded operators. On the other hand, it is hard to refute the soundness of such a convention.
Any unbounded, densely defined operator which is bounded on its domain extends uniquely to a bounded operator on the full Hilbert space. So any closed operator which is not defined on all of $H$ is necessarily not bounded on its domain. However, any unbounded operator A is bounded on its domain with respect to the graph norm $\lVert x \rVert_A = \lVert Ax \rVert + \lVert x \rVert$, and $\mathrm{Dom}(A)$ is complete with respect to the graph norm if and only if $A$ is closed. In this way, boundedness plays a crucial role in the theory of unbounded operators.<|endoftext|>
TITLE: Approximation to unsolvable system of equations
QUESTION [8 upvotes]: I am working on a project and need to find the "closest" numerical values that satisfy the following equations:
\begin{equation}
\left\{
\begin{array}{}
A \cdot C = \frac{1}{2} \\
A \cdot D = \frac{5}{6} \\
B \cdot C = \frac{1}{8} \\
B \cdot D = \frac{1}{2}
\end{array}
\right.
\end{equation}
Where $A$ and $B$ are under the constraint that they are non-negative integers and that $C$ and $D$ must be greater than zero but less than or equal to $1$.
From what I have tried so far, it seems no analytical solution exists. For my purposes an approximate solution will suffice (matches the left-hand side of the equation to several decimal points). Having obtained my BS in engineering, my guilty pleasure is Excel's Solver add-in. Using this tool to the best of my ability, I have yet to obtain a satisfactory approximate solution.
I am not asking anyone to "solve" this for me (though I wouldn't object), but would appreciate being pointed in the right direction. To summarize:
I need to find values for $A$, $B$, $C$, and $D$ which fall under the
constraints mentioned above and provide the closest values to the
actual values on the left-hand side of the above equations.
If the closest values are still unsatisfactory, is there some way of
proving that they are indeed the "best" set of values (assuming the
method used to find them does not inherently identify the "best"
set)?
If there is a matter of tradeoff in accuracy vs integer size, I would
prefer the smallest value integers possible that still provide
reasonably close solutions to the equations (several decimal points).
REPLY [2 votes]: None of the answers thus far have dealt with the integer constraint on $(A,B)$.
If we define two vectors, one integer and one real,
$$\eqalign{
a &= [\, A \,\, B \,]^T \cr
x &= [\, C \,\, D \,]^T \cr
}$$
Then the problem is to minimize the function $$f=\|M-ax^T\|^2_F$$
where $M = [\, 1/2 \,\,\, 5/6;\, 1/8 \,\,\,1/2 \,]$.
Assume that the integer vector, $a$, is known or given. Then by setting the gradient zero, we can solve explicitly for the optimum real vector
$$\eqalign{ x &= \frac{M^Ta}{a^Ta} }$$
Now all we need to do is write a program to search through lots of integer vectors.
#!/usr/bin/env julia
M = [ 1/2 5/6; 1/8 1/2 ];
n=999; fmin=5.0;
for i=1:n, j=1:n
a = [i j]';
x = M'*a/(a'*a);
f = vecnorm(M-a*x');
if f < fmin
@printf("(%d,%d), ", i,j)
fmin = f
end
end
(1,1), (2,1), (15,8), (17,9), (19,10), (21,11), (23,12), (25,13),
(27,14), (29,15), (147,76), (176,91), (205,106), (234,121),
(263,136), (292,151), (555,287), (847,438)
Note that the ratio (i/j) is approximately $1.9$ for all these solutions. To speed things up, we can use this ratio to limit the range on the second index, and extend the search range to even larger integers.
#!/usr/bin/env julia
M = [ 1/2 5/6; 1/8 1/2 ];
i,j = (847,438);
a=[i j]'; x=M'*a/(a'*a);
n=99999; fmin=vecnorm(M-a*x');
for i=1:n, j=round(Int,i/2.1):round(Int,i/1.7)
a = [i j]'
x = M'*a/(a'*a)
f = vecnorm(x*a'-M')
if f < fmin
@printf("(%d,%d), ", i,j)
fmin = f
end
end
(1139,589), (3125,1616), (4264,2205),
(5403,2794), (9667,4999), (52599,27200)
The solution corresponding to the penultimate index pair is
$$\eqalign{
a &= [\, 9667 \,\,\, 4999 \,]^T \cr
x &= [\, 4.6085224452467385 \,\,\, 8.91189971076149 \,]^T \times 10^{-5} \cr
f &= 0.017838286620463398 \cr
}$$
The best of the small ($\le 100$) index pairs is
$$\eqalign{
a &= [\, 29 \,\,\, 15 \,]^T \cr
x &= [\, 0.015361163227016885 \,\,\, 0.029706066291432145 \,]^T \cr
f &= 0.01783829737335834 \cr
}$$<|endoftext|>
TITLE: Almost sure convergence implies convergence in distribution - proof using monotone convergence
QUESTION [6 upvotes]: I'm trying to understand the following proof of the statement: "Almost sure convergence implies convergence in distribution"
The definition of convergence in distribution is given as follows :
$X_n$ converges in distribution to $X$ if and only if for every bounded continuous real function $f$ we have:
$$\lim_{n \rightarrow +\infty }E\left[f(X_n)\right]=E\left[f(X)\right]$$
The proof goes like this :
If $X_n$ converges almost surely to $X$, then (since $f$ is continuous) $f(X_n)$ converges almost surely to $f(X)$. Now using the dominated convergence theorem, which states:
$$\lim_{n \rightarrow +\infty } \int f_n d\mu = \int f d\mu $$
we get :
$$\lim_{n \rightarrow +\infty }E[f(X_n)]=E[f(X)].$$
My question is this : How using the dominated convergence theorem gets us from :
$$f(X_n)\rightarrow^{a.s.} f(X)$$ to $$\lim_{n \rightarrow +\infty }E[f(X_n)]=E[f(X)]$$
given that the almost sure convergence is given by : $P\left[\lim_{n \rightarrow + \infty} X_n = X\right] = 1$?
One of my attempts is to write the convergence in distribution in the form of integrals like this :
$$\lim_{n \rightarrow +\infty }E[f(X_n)]=E[f(X)]$$
is equivalent to :
$$\lim_{n \rightarrow +\infty }\int f(y) \phi_n(y) dy=\int f(y) \phi(y) dy$$
with $\phi$ the density of $X$ and $\phi_n$ the density of $X_n$. But this is a little different from the dominated convergence theorem result. In the dominated convergence theorem we have the same measure, $\mu$, but writing the expectations gives us two different measures, which are $\phi_n\, dy$ and $\phi\, dy$, respectively the distributions (laws) of $X_n$ and $X$.
Any help please? I appreciate if you can tell me why my attempt is not leading anywhere and at the same time give your own proof. Thank you!
REPLY [4 votes]: In order to apply dominated convergence, write the expectations as integrals over the probability space $(\Omega,{\cal F},P)$, not the real line:
$$E(f(X_n))=\int_\Omega f(X_n(\omega))\,P(d\omega)\to\int_\Omega f(X(\omega))\,P(d\omega)=E(f(X)).$$<|endoftext|>
TITLE: Matrix equation $A^2+A=I$ when $\det(A) = 1$
QUESTION [8 upvotes]: I have to solve the following problem:
find the matrix $A \in M_{n \times n}(\mathbb{R})$ such that:
$$A^2+A=I$$ and $\det(A)=1$.
How many of these matrices can be found when $n$ is given?
Thanks in advance.
REPLY [2 votes]: Consider the Jordan Canonical Form for $A$; that is, $A = PJP^{-1}$ for some invertible $P$ and block diagonal matrix $J$ whose blocks are either diagonal or Jordan blocks (same entry on the main diagonal, $1$s on the diagonal above the main diagonal, and $0$s elsewhere).
Then, the equation reduces to $J^2 + J = I$. Looking at the diagonal entries on both sides, this reduces to solving $r^2 + r = 1$, which has solutions $r = \frac{-1 \pm \sqrt{5}}{2}$. In other words, $J$ has diagonal entries are $\frac{-1 \pm \sqrt{5}}{2}$. However, we want $|A| = 1$, which is equivalent to $|J| = 1$. This is only possible if $n$ is even and $J$ has an equal even number of $\frac{-1 + \sqrt{5}}{2}$ and $\frac{-1 - \sqrt{5}}{2}$ on its diagonal, due to $(\frac{-1 + \sqrt{5}}{2})(\frac{-1 -\sqrt{5}}{2}) = -1$ and no positive integral power of $\frac{-1 \pm \sqrt{5}}{2}$ equaling $\pm 1$.
As a summary, there are no solutions when $n$ is not a multiple of $4$.
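Here is a concrete numerical instance (my own sketch) of the construction described in the next paragraph, for $n=4$:
import numpy as np
r1 = (-1 + np.sqrt(5)) / 2            # the two roots of r^2 + r = 1
r2 = (-1 - np.sqrt(5)) / 2
J = np.diag([r1, r1, r2, r2])         # two copies of each root, so det(J) = (r1*r2)^2 = 1
P = np.random.default_rng(1).normal(size=(4, 4))   # a generic (hence invertible) P
A = P @ J @ np.linalg.inv(P)
print(np.allclose(A @ A + A, np.eye(4)), np.linalg.det(A))   # True  ~1.0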
Otherwise, when $n$ is a multiple of $4$, there are infinitely many solutions. To illustrate this, let $J$ be diagonal with an equal and even number of $\frac{-1 + \sqrt{5}}{2}$ and $\frac{-1 - \sqrt{5}}{2}$ on its diagonal. Letting $P$ vary (as there are infinitely many invertible matrices that do not commute with $J$) will give infinitely many possibilities for $A$ when $n$ is a multiple of $4$.<|endoftext|>
TITLE: Differentiation under the integral sign for volumes in higher dimensions
QUESTION [8 upvotes]: Consider a smooth convex/compact domain $D\subset \mathbb{R}^n$ and a smooth, concave function $F:D\to \mathbb{R}$. Then we can define the function that simply takes the volume of the upper contour sets determined by the argument:
$$G(t) = \int_{\{x\in D \; : \; F(x) \ge t\}} d\lambda$$
where $\lambda$ denotes the Lebesgue measure. I'm trying to figure out an expression for $\frac{d}{dt}G(t)$.
This seems like nothing more than a special case of a higher-dimensional Leibniz Integral Rule, but wikipedia gives me a substantially more general formula than I suspect I need for this case (for definitions of terms see the link):
$$\frac{d}{dt} \int_{\Omega(t)} \omega = \int_{\Omega(t)} i_{\vec{v}}(d_x \omega) + \int_{\partial \Omega(t)} i_{\vec{v}}\omega + \int_{\Omega(t)} \dot{\omega}.$$
I have almost no background in differential forms, but immediately I know, for starters, the volume form I'm integrating is time invariant so the last term drops out here. Moreover, given I'm just concerned with a uniform density, I'd imagine the first term should be zero too? (This corresponding to the intuition that all that really matters here is how much 'volume bleeds out of the bag $\Omega(t)$' as I cinch it shut by increasing $t$, and hence I need only be concerned with the incremental flow of volume across the boundary.) But that may be wildly incorrect.
Ideally if someone could help guide me (ideally both intuitively and analytically) to be able to understand and describe this derivative I'd be very grateful! In particular an expression for what the Leibniz rule reduces to in this case would be most welcome.
REPLY [2 votes]: If you're just considering volumes in $\mathbb R^n$, there's no need to get differential forms involved - just use the "Reynolds transport theorem" from that same Wikipedia page, which in this case just gives the boundary integral
$$\frac d{dt}G(t) = \int_{\{ F = t\}} \mathbf{v}\cdot \mathbf{\nu}\ dA$$
where $\mathbf{v} = \nabla F / |\nabla F|^2$ is the velocity vector field of the boundary, $\mathbf{\nu}$ is the outwards unit normal and $dA$ is the hyperarea element on $\{F =t\} = \partial \{F\le t\}$. So we have two things to justify: why is this formula true, and why is that the correct expression for the velocity?
The intuition behind the integral formula is pretty simple: if the boundary moves in the outwards normal direction with speed $s$, then over a time interval $dt$, part of the boundary with hyperarea $dA$ will sweep out the extra volume $s\ dt\ dA$; so the whole boundary will sweep out the extra volume $dG = dt \int s\ dA$. Since moving the boundary with velocity $\bf v$ is equivalent to moving it normally with velocity $\bf v \cdot \nu$, we get $dG/dt = \int{\bf v \cdot \nu}\ dA$ as desired.
To justify this rigorously is somewhat difficult - in this smooth case I think we need to introduce the idea of flows and Lie derivatives, which I won't try to do here. More generally we could cite something like the co-area formula with $u=F$, $g=|\nabla F|^{-1}$.
For the velocity expression, note that there is some freedom in how we choose $\bf v$, since we could reparametrize the boundary as we move through time. (What remains invariant is the normal velocity $\bf v \cdot \nu$.) Thus we just need to find some path $\mathbf{x}(t)$ that stays in $\{F = t\}$, and the velocity of this path will give us a velocity of the boundary. The guess $\mathbf{v}=\nabla F / |\nabla F|^2$ should be somewhat intuitive - greater $|\nabla F|$ means $F$ has a steeper slope, so you have to move slower in order to achieve the same rate of change of $F$. If $\mathbf{x}'(t)=\nabla F/|\nabla F|^2$ then from the chain rule we can compute $\frac d{dt}F(\mathbf{x}(t)) = \nabla F \cdot \mathbf{x}'(t)=1$, so if $F(\mathbf{x}(t))=t$ then $F(\mathbf{x}(s))= t+\int_t^{s} 1 = s$ for all later times $s$; i.e. $\mathbf{x}(t)$ stays on the boundary as desired.<|endoftext|>
TITLE: Is a weakly differentiable function differentiable almost everywhere?
QUESTION [7 upvotes]: I am working with Sobolev spaces. Let's suppose $\Omega \subset \mathbb{R}^n$ is an open set.
A function $u: \Omega \to \mathbb{R}$ in $L^1(\Omega)$ is said to be weakly differentiable if there exist functions $ g_1,...,g_n $ such that $$\qquad\qquad\qquad\qquad\qquad\int_{\Omega}u\varphi_{x_i}=-\int_{\Omega}g_i \varphi \quad \quad \forall \varphi \in C^{1}_c(\Omega), \forall i=1,...,n. $$
Can every weakly differentiable function be restricted to an open set $\tilde{\Omega}$ such that $u$ is differentiable (classical derivative!) with $m(\Omega - \tilde{\Omega})=0$?
I know this is true for dimension $1$. Is it true for every $\Omega \subset \mathbb{R}^n$?
REPLY [7 votes]: This is not true for dimension $n>1$. In the book "Some Applications of Functional Analysis in Mathematical Physics" by Sobolev the following example is constructed. Consider the function $\varphi(x,y)=f_1(x)+f_2(y)$, where $f_1$ and $f_2$ are continuous, nowhere differentiable functions on $\mathbb{R}$. Then $\varphi$ doesn't have strong (classical) derivatives, but the weak derivative $\frac{\partial^2\varphi}{\partial x\partial y}$ exists and equals $0$ on every $\Omega=(a,b)\times(c,d)$. Indeed, for any test function $\psi\in C_c^{\infty}(\Omega)$,
$$
\int\limits_\Omega\varphi\frac{\partial^2\psi}{\partial x\partial y}d\mu
=
\int\limits_\Omega f_1(x)\frac{\partial^2\psi}{\partial x\partial y}d\mu
+
\int\limits_\Omega f_2(y)\frac{\partial^2\psi}{\partial x\partial y}d\mu.
$$
But $\frac{\partial\psi}{\partial x}=\frac{\partial\psi}{\partial y}=0$ on the boundary $\overline{\Omega}\setminus\Omega$ of $\Omega$ (since $\psi$ has compact support in $\Omega$), so
$$
\int\limits_{\Omega}f_1(x)\frac{\partial^2\psi}{\partial x\partial y}d\mu
=
\int\limits_a^bf_1(x)\int\limits_c^d \frac{\partial^2\psi}{\partial x\partial y}dy\, dx
=
\int\limits_a^bf_1(x) \frac{\partial\psi}{\partial x}\Bigg|_{y=c}^{y=d} dx =0
$$
and the same way we get
$$
\int\limits_{\Omega}f_2(y)\frac{\partial^2\psi}{\partial x\partial y}d\mu = 0.
$$
Therefore
$$
\int\limits_{\Omega}\varphi\frac{\partial^2\psi}{\partial x\partial y}d\mu = 0.
$$<|endoftext|>
TITLE: n-th roots of unity summing to $0$
QUESTION [5 upvotes]: Let $\zeta = e^{2\pi i/n}$ be an $n$-th root of unity, and let $S = \{\zeta^m|m=0,1,\ldots,n-1\}$ be the corresponding sets of all $n$-th roots of unity.
Let $k \leq n$. Let $C \subseteq S$ such that $k=|C|$.
I made following conjecture, but so far I'm unable to prove it:
Then $\sum_{c\in C} c = 0$ implies that $k= |C|$ is a $\mathbb Z$-linear combination of strict divisors (divisors strictly greater than 1) of $n$.
This seems to be plausible, and I checked it up to $n=15$. For $n=15$ we have the interesting case that the converse does not hold for $k=11 = 1\cdot 5 + 2 \cdot 3$. Another observation we can use is that for $C \subset S$ we have the equivalence $$\sum_{c \in C} c = 0 \iff \sum_{d \in S \setminus C} d= 0$$ which is quite obvious when you consider that $\sum_{s\in S} s = 0$.
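For what it's worth, this kind of check is easy to automate; here is a brute-force sketch of my own for $n=15$, listing which subset sizes admit a vanishing sum of distinct $15$-th roots of unity:
import cmath
from itertools import combinations
n = 15
roots = [cmath.exp(2j * cmath.pi * m / n) for m in range(n)]
sizes = [k for k in range(n + 1)
         if any(abs(sum(c)) < 1e-9 for c in combinations(roots, k))]
print(sizes)   # [0, 3, 5, 6, 9, 10, 12, 15] -- in particular k = 11 does not occur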
So can anyone prove or disprove this conjecture?
REPLY [6 votes]: This is true, in fact slightly more is true, namely that it is a combination with positive coefficients (which is likely intended). If one allows repetitions of the roots, then the converse is true too.
This is a consequence of the main result of the following paper.
T. Y. Lam and K. H. Leung, MR 1736695 On vanishing sums of roots of unity, J. Algebra 224 (2000), no. 1, 91--109.
Below is its abstract:
An unsolved problem in number theory asked the following: For a given natural number $m$, what are the possible integers $n$ for which there exist $m$-th roots of unity $\alpha_1, \dots, \alpha_n \in \mathbb{C}$ such that $\alpha_1 + \dots + \alpha_n=0$? We show in this paper that the set of all possible $n$'s is exactly the collection of $\mathbb{N}$-combinations of the prime divisors of $m$, where $\mathbb{N}$ denotes the set of all non-negative integers. The proof is long and involves a subtle analysis of minimal vanishing sums of mth roots of unity, couched in the setting of integral group rings of finite cyclic groups. Our techniques also recovered with ease some of the classical results on vanishing sums of roots of unity, such as those of Rédei, de Bruijn, and Schoenberg.
Note that they allow repetitions of the roots. But for the direction you ask about this no problem.<|endoftext|>
TITLE: Family of partitions, s.t. the quadratic variation of a BM diverges a.s.
QUESTION [6 upvotes]: This question is about a specific step in the solution of exercise 1.13 a) of the book "Brownian Motion" by Peres and Mörters (https://www.stat.berkeley.edu/~peres/bmbook.pdf). The exercise is on page 40 and its solution on page 315. $B$ stands for a Brownian Motion.
We need to show that, almost surely, there exists a family
$0=t_0^{(n)} \le t_1^{(n)} \le ... \le t_{p(n)}^{(n)}=t$ of (random) partitions, such that $\lim\limits_{n \to \infty} \sum\limits_{j=1}^{p(n)} \left(B\left(t_j^{(n)} \right) - B\left(t_{j-1}^{(n)} \right) \right)^2 = \infty$.
The solution gives the arguments, that for given $M>0$ large, for any fixed $s\in [0,1]$, there exists $n \in \mathbb{N}$, such that the dyadic interval $I(n,s):=[k2^{-n},(k+1)2^{-n}]$ containing $s$ satisfies $\left |B\left((k+1)2^{-n}\right)-B\left(k2^{-n}\right)\right| \geq M2^{-\frac{n}{2}}$ (call it inequality 1) and call $N(s)$ the smallest integer $n$ for which this inequality holds. We think that $k$ depends on $s$, so we write $k(s)$.
The solution tells us to apply Fubini's theorem to see that $N(s) < \infty$ almost surely. My question is now, whether we applied Fubini correctly.
We thought that (call it remark 1)
$N(s) < \infty$ almost surely $\thinspace$ $\Leftrightarrow \thinspace \int_{0}^{1} \mathbb{P} \left( \underset{n \ge 0}\bigcup \left\{ \left| B \left( \frac{k(s)+1}{2^n}\right)-B \left( \frac{k(s)}{2^n}\right)\right| \geq M2^{-\frac{n}{2}}\right\}\right) \mathrm{d}s < \infty$.
Then to apply Fubini, we write
\begin{align*}&\int_{0}^{1} \mathbb{P} \left( \underset{n \ge 0}\bigcup \left\{ \left| B \left( \frac{k(s)+1}{2^n}\right)-B \left( \frac{k(s)}{2^n}\right)\right| \geq M2^{-\frac{n}{2}}\right\}\right) \mathrm{d}s \\&=
\int_{0}^{1} \int_{\Omega} \Bbb{1}_{\left\{ \underset{n \ge 0}\bigcup \left\{ \left| B \left( \frac{k(s)+1}{2^n}\right)-B \left( \frac{k(s)}{2^n}\right)\right| \geq M2^{-\frac{n}{2}}\right\} \right\}} \mathrm{d}\mathbb{P}(\omega) \mathrm{d}s \\
&= \int_{\Omega} \int_{0}^{1} \Bbb{1}_{\left\{ \underset{n \ge 0}\bigcup \left\{ \left| B \left( \frac{k(s)+1}{2^n}\right)-B \left( \frac{k(s)}{2^n}\right)\right| \geq M2^{-\frac{n}{2}}\right\} \right\}} \mathrm{d}s \mathrm{d}\mathbb{P}(\omega)\end{align*}
Now, since we know that there exists $n$, such that inequality 1 is fulfilled, the indicator function is equal to 1, hence the last expression is equal to $\int_{\Omega} \int_{0}^{1} \mathrm{d}s \mathrm{d}\mathbb{P}(\omega)=1<\infty$ which implies due to remark 1 that $N(s)< \infty$.
Can anybody say something especially about the correctness of remark 1 and whether you think the author meant that by using Fubini's theorem? Thanks for any comments.
REPLY [2 votes]: $N(s) < \infty$ almost surely $\thinspace$ $\Leftrightarrow \thinspace \int_{0}^{1} \mathbb{P} \left( \underset{n \ge 0}\bigcup \left\{ \left| B \left( \frac{k(s)+1}{2^n}\right)-B \left( \frac{k(s)}{2^n}\right)\right| \geq M2^{-\frac{n}{2}}\right\}\right) \mathrm{d}s < \infty$
Note that the condition on the right-hand side is trivially satisfied since any bounded (measurable) function is integrable with respect to a finite measure. So, using $0 \leq \mathbb{P}(A) \leq 1$ for any measurable set $A$, it is obvious that the integral on the right-hand side is finite. However, I don't see why this is useful to prove $N(s)<\infty$.
Now, since we know that there exists $n$, such that inequality 1 is fulfilled
If you already know that there exists such $n$, then there is nothing to prove since $N(s) \leq n$ implies $N(s)<\infty$.
One possibility to prove $N(s)<\infty$ almost surely is the following: We have
$$N(s,\omega) = \infty \iff \forall n \in \mathbb{N}: |B((k+1) 2^{-n},\omega)-B(k2^{-n},\omega)| < M 2^{-n/2}.$$
Using the independence and stationarity of the increments, we find
$$\begin{align*} \mathbb{P}(N(s)=\infty) &= \mathbb{P} \left( \bigcap_{n \geq 1} \{|B((k+1) 2^{-n})-B(k2^{-n})| < M 2^{-n/2}\} \right) \\ &= \lim_{k \to \infty} \prod_{n =1}^k \mathbb{P}(|B((k+1) 2^{-n})-B(k2^{-n})| < M 2^{-n/2}) \\ &= \lim_{k \to \infty} \prod_{n =1}^k \mathbb{P}(|B(2^{-n})| < M 2^{-n/2}).\end{align*}$$ Since, by Brownian scaling, $B(2^{-n})$ has the same distribution as $2^{-n/2}B(1)$, and $\mathbb{P}(|B_1| < M)<1$ for any $M>0$, we get
$$\mathbb{P}(N(s)=\infty) = \lim_{k \to \infty} \prod_{n =1}^k \mathbb{P}(|B_1| < M) = \lim_{k \to \infty} \mathbb{P}(|B_1| < M)^k = 0.$$ This proves $N(s)<\infty$ almost surely.<|endoftext|>
TITLE: Which theorems of classical mathematics cannot be proved without using the law of excluded middle?
QUESTION [7 upvotes]: The law of excluded middle is a logical principle that says that for any sentence $A$, the sentence $A\lor\,\neg A$ is true. This is a valid law of classical logic, but is rejected by intuitionistic logic.
However, for some proofs of mathematical theorems that use the law of excluded middle, there exists an alternative proof of the theorem that does not use the law of excluded middle.
Is there any theorem of classical mathematics that cannot be proved without using the law of excluded middle?
REPLY [3 votes]: There are quite a few other common theorems we cannot prove without some level of classical logic (but which do have constructive analogs).
Analysis
For example, the following two theorems cannot be proved in constructive logic:
The Cauchy Sequence real numbers form a complete metric space.
The Cauchy Sequence and Dedekind Cut definitions of the real numbers are equivalent.
To prove the above two facts, it suffices to assume the "very weak choice principle". This very weak choice principle states: suppose that $P$ and $Q$ are predicates on $\mathbb{N}$. If $\forall n \in \mathbb{N}, P(n) \lor Q(n)$, then there is some function $f : \mathbb{N} \to \{0, 1\}$ such that for all $n \in \mathbb{N}$, if $f(n) = 0$ then $P(n)$, and if $f(n) = 1$ then $Q(n)$. This fact is easy to prove using classical logic - define $f(n) = 0$ if $P(n)$, and $1$ otherwise. Many constructivists assume this axiom (or a stronger version such as countable choice). The three major schools of constructivism - Bishop's school, Markov's school, and Brouwer's school - all assume a version of countable choice strong enough to imply the above.
Other examples include the intermediate value theorem (see ryan221b's answer) and the mean value theorem. There is an approximate version of the mean value theorem; as far as I can tell, it appears to require the very weak choice principle mentioned above.
Finally, a kicker: without classical logic, it's impossible to prove
Not all functions $\mathbb{R} \to \mathbb{R}$ are continuous.
It's perfectly consistent with constructive logic that all such functions are continuous - in fact, any such function that can be defined constructively can be constructively proved to be continuous.
Cardinality
Another example of a theorem which cannot be proved constructively is the Schroder-Bernstein theorem, which states that if there is an injection $A \to B$ and an injection $B \to A$, one can find a bijection between $A$ and $B$. This theorem is actually equivalent to the Law of Excluded Middle!
The fact that every subset of a finite set is finite is equivalent to Excluded Middle (in fact, even saying that every subset of a 1-element set is finite is equivalent to Excluded Middle).
The fact that every set whose elements can be finitely enumerated is finite is also equivalent to Excluded Middle (in fact, it suffices to consider sets of the form $\{x_1, x_2\}$).
More starkly, without the law of excluded middle, one cannot rule out the existence of an injective function $\mathbb{N}^\mathbb{N} \to \mathbb{N}$. This really throws the whole idea of "cardinality as size" out the window.
Algebra
Plenty of results in algebra rely on the axiom of choice. Since this axiom in turn proves Excluded Middle, these results are not constructive. Examples of such theorems are that every vector space has a basis, or that every nonzero ring has a maximal ideal.
Other examples which are equivalent to excluded middle include
$\mathbb{Z}$ is a Principal Ideal Domain
Every subspace of a finite-dimensional $\mathbb{R}$-vector space is finite-dimensional
Every quotient of a finite-dimensional $\mathbb{R}$-vector space is finite-dimensional
Other unprovable statements which are weaker than excluded middle but still can't be proven include
Every $\mathbb{R}$-matrix can be put into row-reduced echelon form
Every square $\mathbb{R}$-matrix is either invertible or not
However, much of algebra is constructive. For example, an identity holds for all groups (or all rings, or all monoids, etc.) if and only if it can be proved constructively from the group axioms.
Number Theory + Computability Theory
There are actually quite a few statements in number theory which are equi-provable under classical and constructive logic. In particular, any intuitionistically $\Pi_2$ statement is provable constructively if and only if it's provable classically. A $\Pi_2$ statement is one which can be phrased as $\forall m \exists n P(n, m)$, where all the quantifiers occurring in $P$ are bounded (meaning that they are of the form $\forall a \leq b$ or $\exists a \leq b$). This includes many open problems like the twin primes conjecture, the $3n + 1$ conjecture, the Goldbach conjecture, Landau's conjecture, Schinzel's hypothesis, Legendre's conjecture, the weak Bunyakovsky conjecture, $P \neq NP$, and more.
However, there are some statements which are not provable constructively.
For every unary primitive recursive function $f : \mathbb{N} \to \mathbb{N}$, either there is some $n$ such that $f(n) = 0$, or for all $n$, $f(n) \neq 0$.
Not all functions $f : \mathbb{N} \to \mathbb{N}$ are computable.
In fact, any function $\mathbb{N} \to \mathbb{N}$ which can be defined constructively can also be constructively proved to be computable.<|endoftext|>
TITLE: Smallest future date which involves no repetition of a digit in the format DD/MM/YYYY
QUESTION [12 upvotes]: What is the smallest future date which involves no repetition of a digit in the format DD/MM/YYYY for the year? What is your approach?
REPLY [2 votes]: If we require the date and month to use both digits without an initial zero, then we need to wait until October 27, 3456 (27/10/3456 Europe, 10/27/3456 USA). The constraints are: we must use 0, 1, 2 in the month and the date (the month has two of those digits, and the date can't have both digits greater than 2); then we use the lowest available digit for the millennium, then the century, and so on.
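A quick brute-force sketch (assuming the search starts in 2016 and that both day and month are written with two digits not starting with zero, as required above):
    from datetime import date, timedelta

    # Scan forward day by day until DD/MM/YYYY uses 8 distinct digits,
    # with day >= 10 and month >= 10 (no leading zero in day or month).
    d = date(2016, 1, 1)
    while not (d.day >= 10 and d.month >= 10 and len(set(d.strftime("%d%m%Y"))) == 8):
        d += timedelta(days=1)
    print(d)  # 3456-10-27, i.e. 27/10/3456
<|endoftext|>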
TITLE: Is a bijective smooth function a diffeomorphism almost everywhere?
QUESTION [12 upvotes]: Suppose I have $f: M \rightarrow N \in C^{\infty}$ a smooth bijection between $n$-dimensional smooth manifolds. Does it have to be a diffeomorphism except for a set of measure 0?
I think the proof might come from showing that $X = \{p: d_pf \text{ is not an isomorphism}\}$ has measure zero. Using the inverse function theorem you can show that the statement follows from this. By Sard's theorem, we know that $f(X)$ has measure zero, but I don't know how to go from there to $X$ having measure zero (since we don't know, for example, that $f^{-1}$ is locally Lipschitz).
You may assume (if you want) that $M$ and/or $N$ are connected and/or compact.
Thanks!
REPLY [11 votes]: I think the answer is no. Suppose $E\subset \mathbb R$ is closed, has positive measure, and has no interior (for example, $E$ could be the complement of an open set of small measure containing the rationals).
As is well known, there exists a $C^\infty$ function $f: \mathbb R\to [0,\infty)$ such that $f=0$ on $E$ and $f>0$ on $\mathbb R \setminus E.$ Define
$$F(x) = \int_0^x f(t)\, dt.$$
If $x < y,$ then $F(y) - F(x) = \int_x^y f(t)\,dt > 0,$ since $f > 0$ on the nonempty open set $(x,y)\setminus E,$ hence $F(y) > F(x).$ Thus $F,$ which is $C^\infty,$ is strictly increasing, hence is a bijection onto $F(\mathbb R),$ a nice open interval. But $F'(x) = f(x)$ everywhere. Since $f= 0$ on $E,$ $F$ fails to be a local diffeomorphism at each point of $E,$ a set of positive measure.<|endoftext|>
TITLE: What to keep in mind when attempting proof of basic properties of divisibility/what techniques are useful/what's the intuition for showing them?
QUESTION [7 upvotes]: So I am currently trying to prove some basic divisibility relations, as follows.
If $a \mid b$ and $a \mid c$, then $a \mid b + c$.
If $a \mid b$ and $s \in \mathbb{Z}$, then $a \mid sb$.
If $a \mid b$ and $a \mid c$ and $s$, $t \in \mathbb{Z}$, then $a \mid sb + tc$.
If $a \mid b$ and $b \mid c$, then $a \mid c$.
$a \mid 0$ for all $a \neq 0$.
$1 \mid b$ for all $b \in \mathbb{Z}$.
If $a \mid b$ and $b \neq 0$, then $|a| \le |b|$.
If $a \mid b$, then $\pm a \mid \pm b$.
I frequently find myself having trouble showing these quite basic facts.
What should I keep in mind when trying to prove these properties, i.e. what techniques are useful?
What is the intuition for the proofs of these facts, or rather, morally why must these facts be true?
Thanks in advance.
REPLY [2 votes]: Good question.
It may be easiest to prove some of these facts straight from the definition. That is, recall that $a\mid b$ means that $\exists k\in\mathbb{Z}$ such that $ak=b$. For your first property, we have $ak=b$ and $al=c$ for some $k,l\in\mathbb{Z}$.
When we add those two together we find $ak+al=b+c$. Then, by distributivity, $(k+l)a=b+c$. Since $k+l\in\mathbb{Z}$, we have that $a|(b+c)$ by definition.
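As a quick sanity check (a small brute-force sketch over small integers, purely illustrative; the real proofs go through the definition as above), one can test a few of the listed properties:
    divides = lambda a, b: b % a == 0
    rng = range(1, 30)
    # a|b and a|c  =>  a|(b+c)
    assert all(divides(a, b + c) for a in rng for b in rng for c in rng
               if divides(a, b) and divides(a, c))
    # a|b and b|c  =>  a|c
    assert all(divides(a, c) for a in rng for b in rng for c in rng
               if divides(a, b) and divides(b, c))
    # a|b and b != 0  =>  |a| <= |b|
    assert all(abs(a) <= abs(b) for a in rng for b in rng if divides(a, b))
    print("all checks passed")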
Hopefully that provides some framework to prove some of these other statements since many amount to simple algebraic manipulation once you apply the definition of "divides."<|endoftext|>
TITLE: Determine all functions $f$ on $\mathbb R$ such that $f(x^2+yf(x))=f(x)f(x+y)$ for all $x,y$
QUESTION [11 upvotes]: Find all functions $f: \mathbb R \rightarrow \mathbb R$ such that
$$f(x^2+yf(x))=f(x)f(x+y). $$ for all $x,y$ real numbers.
I think that the only three solutions are: $f(x)=0$, $f(x)=1$ and $f(x)=x$.
I would appreciate any suggestions.
REPLY [3 votes]: Remark (1). Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be a non-constant solution of the given functional equation with $f(1)=1$. Define
$$Fix(f)=\{ x>0\ : \ f(x)=x \}.$$
$(1)$ For any $x\in Fix(f)$ and any $y\in \mathbb{R}$
$$f(xy)=xf(y).$$
$(2)$ $Fix(f)$ is a subgroup of the multiplicative group of positive real numbers.
$(3)$ If $Fix(f)\neq \{1\}$, then the set $Fix(f)$ is dense in $\mathbb{R}^+$. If moreover $f$ is continuous, then $$f(x)=x$$ for all $x\in \mathbb{R}$.
Proof. Letting $y:=y-x$ in functional equation, we have
$$f(x^2+(y-x)f(x))=f(x)f(y)\ \ \ \ \ \ \ (*)$$
for all $x, y\in \mathbb{R}$. Also, substituting $y:=0$ in the functional equation, we have
$$f(x^2)=f(x)^2\ \ \ \ \ \ \ (**)$$
for all $x\in \mathbb{R}$. This implies that the function $f$ is non-negative on the non-negative reals.
$(1)$ Let $x\in Fix(f)$; then $f(x)=x$ and from $(*)$
$$f(x^2+(y-x)f(x))=f(xy)=xf(y)$$
for any $y\in \mathbb{R}$.
$(2)$ Since $f(1)=1$, we have $1\in Fix(f)$. Now if $x, y\in Fix(f)$, we have from $(1)$
$$f(xy)=xf(y)=xy,$$
and so $xy\in Fix(f)$. For any $x \in Fix(f)$, we have from $(1)$
$$1=f(1)=f\left(x\cdot\frac{1}{x}\right)=xf\left(\frac{1}{x}\right)$$
and so $\frac{1}{x}\in Fix(f)$.
$(3)$ There is $x_0\neq 1$ such that $x_0\in Fix(f)$. Define the function $\phi:Fix(f) \rightarrow \mathbb{R}$ as follows:
$$\phi(x)=\ln (x)$$
for all $x\in Fix(f)$. Then $\phi(Fix(f))$ is an additive subgroup of real numbers. From $(**)$ we can show that
$$x_0^\frac{1}{2^n} \in Fix(f)$$
for all natural numbers $n$. Therefore the additive subgroup $\phi(Fix(f))$ cannot be discrete, so it must be dense in $\mathbb{R}$. This implies that $Fix(f)$ is dense on the positive real line.
Now suppose $f$ is continuous; then $Fix(f)=\mathbb{R}^+$ and from $(**)$ we get that
$$f(x)^2=x^2$$
and also letting $y=0$ in $(*)$, we have
$$f(x^2-xf(x))=0$$
for all $x\in \mathbb{R}$. If $f(x)=-x$ for some $x\neq 0$, then $x^2-xf(x)=2x^2$, so $f(2x^2)=0$, contradicting $f(2x^2)^2=(2x^2)^2\neq 0$. Therefore $f(x)=x$ for all $x\in \mathbb{R}$.
Update:
Counterexample: Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be the function defined as follows:
$$f(x)=\begin{cases} x\ \ \text{ if }x\in F\\ 0\ \ \ \text{ if }x\in N\end{cases}$$
for all $x\in \mathbb{R}$, where $N$ is the set of real numbers that are transcendental over $\mathbb{Q}$ and $F$ is the set of real numbers that are algebraic over $\mathbb{Q}$. Then the function $f$ is a solution of the given functional equation.
Proof. You can show that $F\cap N=\emptyset$, $F\cdot N=N$ and $F\cup N=\mathbb{R}$. We use the functional equation in the form $(*)$:
$$f(x^2+(y-x)f(x))=f(x)f(y)\ \ \ \ \ \ \ (*)$$
for all $x, y\in \mathbb{R}$. Now let $x\in N$, then $x^2\in N$ and so
$$f(x^2+(y-x)f(x))=f(x^2)=f(x)^2=0=f(x)f(y)=0$$
for all $y\in \mathbb{R}$. In the other case, let $x\in F$; we have
$$f(x^2+(y-x)f(x))=f(x^2+(y-x)x)=f(xy)=^? xf(y) $$
for all $y\in \mathbb{R}$. If $y\in F$, then $xy\in F$ and
$$xy=f(xy)=xf(y)=xy.$$
If $y\in N$, then $xy\in N$ and
$$0=f(xy)=xf(y)=0.$$
Therefore the function $f$ is a solution of the functional equation $(*)$, and the proof is done.
Remark (2). Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be a non-constant solution of the given functional equation with $f(1)=1$. Define
$$N(f)=\{ x>0\ : \ f(x)=0 \}.$$
$(1)$ $N(f)\cap Fix(f)=\emptyset$ and
$$N(f)\cdot Fix(f)=N(f)$$
$(2)$ If $Fix(f)\neq \{1\}$, then $f(x)=x$ if and only if $N(f)=\emptyset$.
The questioner had guessed that $f(x)=x$ is the only non-constant solution of the functional equation. I tried to prove this, i.e., that if $Fix(f)\neq \{1\}$ then $f(x)=x$. In this regard, I asked a question here; the answer helped me find the counterexample above.<|endoftext|>
TITLE: Making sense of the commutator
QUESTION [12 upvotes]: For a group $G$, the commutator of two elements is defined as $[a,b]=aba^{-1}b^{-1}$, and is usually said to measure the extent to which the elements $a$ and $b$ fail to commute.
I'm having some trouble making sense of the last bit: I understand that if $a$ and $b$ commute, then $[a,b]=e$. But if $a$ and $b$ don't commute, in what sense is the commutator actually capturing the extent of their failure to commute, since there is no way to talk about how "far" an element $g\in G$ is from the identity?
Am I just interpreting the word "measure" too literally here, or is there actually a way to think about commutators that makes it clear in what sense they compare the way two pairs of elements fail to commute?
REPLY [6 votes]: We can't really say "how non-commutative" $a$ and $b$ are, without some corresponding notion of "how much not the identity" any given element of a group $G$ is, as you point out. For some groups, we may be able to do this, but in general, there's no "universal" way.
The real value in this, is not the individual commutators $[a,b]$, but rather the commutator subgroup $[G,G]$. It should be clear $G/[G,G]$ is abelian, for:
$(x[G,G])(y[G,G])(x[G,G])^{-1}(y[G,G])^{-1} = [x,y][G,G] = e[G,G] = [G,G]$
But the story doesn't end there: if $N$ is any normal subgroup such that $G/N$ is abelian, we have $[G,G] \subseteq N$. The reason is very plain:
if for any $x,y \in G$, we have $(xN)(yN) = xyN = yxN = (yN)(xN)$, then we must have $xy(yx)^{-1} = [x,y] \in N$ for any pair $x,y \in G$. Thus $[G,G]$ is minimal among all normal subgroups $N$ that make $G/N$ abelian.
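If a concrete example helps, here is a small computational sketch (using sympy's permutation groups, purely as an illustration) for $G = S_3$:
    from sympy.combinatorics.named_groups import SymmetricGroup

    G = SymmetricGroup(3)
    D = G.derived_subgroup()           # the commutator subgroup [G, G]
    print(D.order(), D.is_normal(G))   # 3 True  (it is A_3)
    print(G.order() // D.order())      # 2: G/[G,G] is abelian of order 2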
Another nice thing about this, is that the way we do it doesn't really depend on the group $G$ in the following sense: if $\phi:G \to H$ is a group homomorphism, we get a group homomorphism $\tilde{\phi}:G/[G,G] \to H/[H,H]$ of abelian groups defined by:
$\tilde{\phi}(x[G,G]) = \phi(x)[H,H]$, since a homomorphism preserves commutators:
$\phi([x,y]) = [\phi(x),\phi(y)]$.<|endoftext|>
TITLE: Using higher order derivatives
QUESTION [5 upvotes]: I am currently learning about the general Notion of Differentiability. I came across some difficulties when working with higher order derivatives and I am hoping for confirmation or comments on some questions I have.
In the following, let $E$, $F$ be Banach Spaces, and let $X\subseteq E$ be open.
I do understand that for $x_0\in X$ it is $Df(x_0)\in\mathcal{L}(E,F)$.
My directional derivative for $v\in E\setminus\{0\}$ is defined as the derivative in $0$ of the function $(-\varepsilon,\varepsilon)\to F, t\mapsto f(x_0+tv)$ with $\varepsilon>0$ suitable to keep $x_0\pm\varepsilon v$ in $X$.
So it is $D_vf(x_0)\in\mathcal{L}(\mathbb{R},F)$. When then stating $D_vf(x_0)=Df(x_0)v$ while $Df(x_0)v\in F$, are we already using the identification $\mathcal{L}(\mathbb{R},F)\cong F$?
When extending the notion to higher order derivatives $D^kf(x_0)\in\mathcal{L}^k(E,F)$ I came across the statement $$D^kf(x_0)(h_1,\dots,h_k)=D(\dots D(Df(x_0)h_1)h_2\dots)h_k$$
that should somehow be linked to the above identification and that I really cannot wrap my head around.
It would be nice to see some step-by-step computation of that formula. Should I read myself more into multi-linear maps?
Thanks in advance for any comment.
REPLY [7 votes]: For the first part of your question (relationship between Gateaux (directional) and Frechet (overall) derivatives), let's break it down step-by-step. The Frechet derivative
$$Df:E \rightarrow \mathcal{L}(E,F)$$
is a nonlinear function that takes as input a point in $E$ at which to make an affine approximation of $f$, and outputs a linear map from $E$ to $F$ corresponding to this local affine approximation. Therefore, evaluating the Frechet derivative at a point,
$$Df(x_0)\in\mathcal{L}(E,F),$$
yields a linear map from $E$ to $F$. Finally, applying this linear map to a vector,
$$Df(x_0)v \in F,$$
yields a vector in $F$.
On the other hand, the Gateaux derivative of $f$ at the point $x_0$ in direction $v$ is already in $F$, since it is defined by the limit of finite differences, which are in $F$:
$$D_v ~f(x_0) := \lim_{s \rightarrow 0} \frac{\overbrace{f(x_0 + sv)}^{\in F} - \overbrace{f(x_0)}^{\in F}}{s}.$$
So, there is no need to invoke any special identifications here.
For the second part of your question (higher derivatives), it is helpful to keep in mind that the space $\mathcal{L}(X,Y)$ of linear maps from one Banach space $X$ to another $Y$, is itself a Banach space (under the induced operator norm). Therefore the first derivative,
$$Df:E \rightarrow \mathcal{L}(E,F),$$
is just a nonlinear function mapping between Banach spaces, with the same domain as $f$, but different range (the range being a space of linear operators, $\mathcal{L}(E,F)$). To get the second derivative of $f$, we just take the first derivative of $Df$:
$$D^2 f:= D(Df),$$
$$D^2 f: E \rightarrow \mathcal{L}(E,\mathcal{L}(E,F)).$$
For even higher derivatives you can see where this is going:
\begin{align}
f:&E \rightarrow F \\
Df:&E \rightarrow \mathcal{L}(E, F) \\
D^2f := D(Df):&E \rightarrow \mathcal{L}(E, \mathcal{L}(E, F)) \\
D^3f := D(D^2f):&E \rightarrow \mathcal{L}(E,\mathcal{L}(E, \mathcal{L}(E, F))) \\
D^4f := D(D^3f):&E \rightarrow \mathcal{L}(E,\mathcal{L}(E,\mathcal{L}(E, \mathcal{L}(E, F)))) \\
\dots
\end{align}
The result that prevents this from becoming an ugly mess is the following isomorphism:
$$\mathcal{L}^n(E,\mathcal{L}^m(E,F)) \cong \mathcal{L}^{n+m}(E,F),$$
where $\mathcal{L}^k(X,Y)$ denotes the space of $k$-multilinear functions from $X$ to $Y$ (functions taking in $k$ input vectors from $X$, outputs a vector in $Y$, and is independently linear in each input), and $\cong$ denotes an isometric isomorphism of Banach spaces. Applying this isomorphism recursively allows us to express the higher derivatives in terms of multilinear maps as follows:
\begin{align}
f:&E \rightarrow F \\
Df:&E \rightarrow \mathcal{L}(E, F) \\
D^2f:&E \rightarrow \mathcal{L}^2(E, F) \\
D^3f:&E \rightarrow \mathcal{L}^3(E, F) \\
D^4f:&E \rightarrow \mathcal{L}^4(E,F) \\
\dots
\end{align}
which is much nicer.
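To see the multilinear picture concretely in finite dimensions, here is a toy numerical sketch (with $E=\mathbb{R}^2$, $F=\mathbb{R}$; the example function and the names are ad hoc): the first derivative acts on one vector, the second on a pair of vectors.
    import numpy as np

    def f(x):   return x[0]**2 * x[1]                                # f: R^2 -> R
    def Df(x):  return np.array([2*x[0]*x[1], x[0]**2])              # element of L(R^2, R)
    def D2f(x): return np.array([[2*x[1], 2*x[0]], [2*x[0], 0.0]])   # element of L^2(R^2, R)

    x0 = np.array([1.0, 3.0])
    h1, h2 = np.array([0.5, -1.0]), np.array([2.0, 0.25])
    print(Df(x0) @ h1)          # Df(x0)(h1)
    print(h1 @ D2f(x0) @ h2)    # D^2 f(x0)(h1, h2), the bilinear (Hessian) form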
A good reference for all this can be found in chapter 9 of the course notes "Methods of Applied Mathematics" by Arbogast and Bona, available online here:
https://www.ma.utexas.edu/users/arbogast/appMath08c.pdf<|endoftext|>
TITLE: Stronger version of Acyclic Models Theorem
QUESTION [7 upvotes]: Let $\mathscr{C}$ be an abelian category. If $P_\bullet \in \operatorname{Ch}_{\geq 0}(\mathscr{C})$ is a bounded below complex of projectives, and $C_\bullet \in \operatorname{Ch}_{\geq 0}(\mathscr{C})$ is a bounded below exact complex, then $[P_\bullet, C_\bullet] = 0$. (Every chain map $P_\bullet \to C_\bullet$ is nullhomotopic.)
It is tempting to conjecture that
If $\mathscr{B} \underset{G}{\overset{F}{\rightrightarrows}}
\operatorname{Ch}_{\geq 0}(\mathscr{C})$ are functors from an
arbitrary category $\mathscr{B}$, where $F$ lands in projective
complexes, and $G$ lands in acyclic complexes, then $[F,G]=0$, meaning
that any natural transformation from $F \Rightarrow G$ is naturally
chain homotopic to the zero natural transformation.
The acyclic models theorem implies something similar: that if $\mathscr{B}$ has models $\mathcal{M}$, and $F$ is a free functor w.r.t. $\mathcal{M}$, and if $G$ is acyclic, then $[F,G]$ is indeed zero.
Is the highlighted theorem above untrue?
Given a natural transformation, I can choose a map $\tau: (FX)_\bullet \to (GX)_\bullet[1]$ for each object $X \in \mathscr{B}$. Making $\tau$ natural is the problem. If I try to define the first map $\tau_0$ in the natural transformation $\tau$, and check whether it is natural, I find that the naturality diagram
\begin{array}{}
FX_0 & \xrightarrow{(Ff)_0} &FY_0\\
(\tau_X)_0 \downarrow && \downarrow (\tau_Y)_0
\\
GX_1 & \xrightarrow{(Gf)_1} & GY_1
\end{array}
only commutes up to a boundary element in $\partial^{GY}(GY_2)$.
REPLY [2 votes]: I also came up with this question recently and made an attempt.
As far as I am concerned, the correct generalization of "free with basis in $\mathcal{M}$" is "projective with basis in $\mathcal{M}$".
Definition:
A functor $T: \mathcal{C}\longrightarrow Mod_{R}$ is said to be projective with basis in $\mathcal{M}$ if the following two conditions hold:
$T(C)$ is projective for all $C\in\mathcal{C}$.
There is a $T$-model set $\chi=\{x_\lambda\in T(M_\lambda)\mid \lambda\in \Lambda\}$ s.t.
$$
\{T(g)(x_\lambda)|g\in \hom(M_\lambda,C), \lambda\in \Lambda\}
$$
is a projective basis for $T(C)$, i.e.
For each $x\in T(C)$, it can be expressed as
$$
x=\sum_{\lambda\in \Lambda}\sum_{g\in \hom(M_\lambda,C)} f_{g,\lambda}^C(x)T(g)(x_\lambda)
$$
where $\{f_{g,\lambda}^C: T(C)\longrightarrow R\}$ is a fixed set of morphisms of $R$-modules.
A functor $S_\bullet:\mathcal{C}\longrightarrow Comp_R$, where $Comp_R$ is the category of chain complexes of $R$-modules, is said to be projective with basis in $\mathcal{M}$ if each $S_n$ is projective with basis in $\mathcal{M}$.
And we can state a proposition:
Proposition:
Suppose $\mathcal{C}$ is a category with models $\mathcal{M}$. Suppose $T_\bullet, S_\bullet:\mathcal{C}\longrightarrow Comp_R$ are two functors such that both $T_\bullet$ and $S_\bullet$ are non-negative. Assume further $T_\bullet$ is projective with basis in $\mathcal{M}$ and $S_\bullet$ is acyclic in the positive degree on each element $M\in\mathcal{M}$.
Suppose
$$
\Theta: H_0\circ T_\bullet\longrightarrow H_0\circ S_\bullet
$$
is a natural transformation. Then there exists a natural chain morphism $\Psi_\bullet:T_\bullet\longrightarrow S_\bullet$ which is unique up to natural chain homotopy and has $H_0(\Psi_\bullet)=\Theta$.
And this proposition seems to be a specialization of Theorem 1 in
Dold A., MacLane S., Oberst U. (1967) Projective classes and acyclic models. In: Reports of the Midwest Category Seminar. Lecture Notes in Mathematics, vol 47. Springer, Berlin, Heidelberg
Hope that helps.<|endoftext|>
TITLE: Non-existence of $C^1$ injective mapping $\mathbb{R}^3 \to \mathbb{R}^2$.
QUESTION [6 upvotes]: A friend of mine did a test yesterday where it asked to prove that there does not exist a $C^1$ injective mapping $\mathbb{R}^3 \to \mathbb{R}^2$.
This is an immediate result from invariance of domain, but since this is a real analysis test (where people are being introduced to differentiation in $\mathbb{R}^n$), I tried to come up with an elementary solution. However, none came to mind.
I thought about using the local form of submersions (which, by the way, I wouldn't expect in this point in the course my friend is taking anyway), but we would need to have a regular value which is on the image, and this is not given by the hypotheses, neither by Sard's theorem.
Since this was on the test, I have the feeling I may be letting something slip. My question therefore is to prove the given statement with only tools of differentiation in $\mathbb{R}^n$ (inverse function theorem, chain rule etc).
REPLY [5 votes]: Here's the most elementary argument I can find. Suppose $f:\mathbb{R}^3\to\mathbb{R}^2$ is $C^1$. By lower semicontinuity of rank, we can find a nonempty open subset $U\subset\mathbb{R}^3$ on which $df$ has constant rank. If you know the constant rank theorem, you're done: since the constant rank of $df$ on $U$ is less than $3$, $f$ will not be injective on $U$. (Incidentally, if you are assuming that students were expected to know the inverse function theorem, it seems reasonable to me that they might also know the constant rank theorem.)
Without using the constant rank theorem, you can finish the proof as follows using the Peano existence theorem for ODEs (you can use the more standard Picard existence theorem if you know that $f$ is $C^2$). I will assume the constant rank of $df$ on $U$ is $2$; the other cases are similar. Fix a point $p\in U$ and let $u,v,w\in\mathbb{R}^3$ be vectors such that $df_p(u)=0$ and $df_p(v)$ and $df_p(w)$ are linearly independent. Shrinking $U$, we may assume that in fact $df_q(v)$ and $df_q(w)$ are linearly independent for all $q\in U$. Define $h:U\to\mathbb{R}^3$ by $h(q)=u+av+bw$, where $a$ and $b$ are the unique scalars such that $df_q(u+av+bw)=0$. Continuity of $df$ (and continuity of matrix inversion) implies that $h$ is continuous.
By the Peano existence theorem, there exists $g:(-\epsilon,\epsilon)\to U$ such that $g(0)=p$ and $g'(t)=h(g(t))$. Since $df_q(h(q))=0$ for all $q\in U$, we find that the derivative of $f\circ g$ vanishes identically. So $f$ is constant on the image of $g$. Since $h$ never vanishes, $g'$ never vanishes, so this image has more than one point. Thus $f$ is not injective.<|endoftext|>
TITLE: Basis-free formula for $\mathrm{Hom}_k(V,V)\rightarrow V^*\otimes V$
QUESTION [5 upvotes]: Let $V$ be a finite dimensional vector space over a field $k$. Then there is a natural map $\phi:V^*\otimes V\rightarrow \mathrm{Hom}_k(V,V)$ given by $$\phi:f\otimes v\mapsto \Big(x\mapsto f(x)v\Big)$$ (extended by linearity). It's not hard to check that this is well-defined and injective, hence surjective by a dimension count.
The above suffices to define a (canonical) map $\phi^{-1}\colon\mathrm{Hom}_k(V,V)\rightarrow V^*\otimes V$.
Explicitly, $\phi^{-1}(g)$ is the unique element of $V^*\otimes V$ that maps to $g$ under $\phi$.
This definition of $\phi$ is basis-free, but it does not seem to give a formula for $\phi^{-1}$ in the same sense that the displayed equation gives a formula for $\phi$.
Question 1: How should I make precise the notion that the displayed equation for $\phi$ counts as a "formula", but the definition of $\phi^{-1}$ does not?
Question 2: Is there some alternate basis-free definition of $\phi^{-1}$ that clearly would count as a formula?
Comment: It is easy to write down a basis-dependent formula for $\phi^{-1}$ and then prove that the result of this formula is independent of the choice of basis. But I'm looking for a formula that never requires picking a basis in the first place.
REPLY [2 votes]: Strictly speaking, it is not true that the definition of $\mathrm{Hom}_k(V,W)\xrightarrow{\phi^{-1}} V^*\otimes W$ based on surjectivity of the injective $V^*\otimes W\xrightarrow{\phi}\mathrm{Hom}_k(V,W)$ is a basis-free definition.
This is because the surjectivity is equivalent to the assertion "$W$ is finite-dimensional", which is secretly the assertion "$W$ has a finite basis".
Here I am distinguishing between basis-free and basis-independent: the former can be defined without reference to a basis, the latter produces something out of a basis, but applied to any basis produces the same thing (although the axiom of choice obscures the distinction by giving every vector space a basis, thus effacing the wider applicability of basis-free definitions).
With that in mind, when you say that $\phi^{-1}(g)$ is the "unique element" of $V^*\otimes W$ that maps to $g\in\mathrm{Hom}_k(V,W)$ under $\phi$, what you're really doing is giving a basis-independent definition of $g$, because that's what dimension-counting does (produces elements from a basis independently of the basis you start with). Accordingly, there are always formulas associated with basis-independent definitions, it's just that the formulas have an additional parameter that is some basis, whereas basis-free definitions give formulas that depend solely on the data of the vector spaces.
So I doubt that there is a basis-free formula for $\phi^{-1}$ because the existence of $\phi^{-1}$ is equivalent to an assertion of finite-dimensionality, hence to the existence of some finite basis. Furthermore, I don't know that injectivity of $V^*\otimes W\xrightarrow{\phi}\mathrm{Hom}_k(V,W)$ has a basis-free proof, which would mean that $\phi^{-1}$ cannot even exist, let alone be defined, without a basis for $W$.
Also, the finite-dimensionality is almost a red herring. The correct thing to do is to describe the image of $V^*\otimes W\xrightarrow{\phi}\mathrm{Hom}_k(V,W)$ as consisting of those linear maps $V\to W$ whose image is finite-dimensional. Then any choice of basis $\{w_i\}$ for $W$ allows you to define $\phi^{-1}$ as follows. By definition, a $V\xrightarrow{g}W$ in the image of $V^*\otimes W\xrightarrow{\phi}\mathrm{Hom}_k(V,W)$ has finite-dimensional image, hence can be written (uniquely) as (jointly finite) sums $g(v)=\sum_i w_i^*(g(v))w_i$ where $\{w_i^*\}\subset W^*$ are the dual vectors to the basis $\{w_i\}$ given by $w_i^*(w_j)=\delta_{ij}=\begin{cases}1&i=j\\0&i\neq j\end{cases}$. This says exactly that $\phi^{-1}(g)=\sum_i(w_i^*\circ g)\otimes w_i$, which is a perfectly good, unambiguous formula, albeit one that depends on the basis $\{w_i\}$.
Note that when I write $\phi^{-1}$, I mean that $\phi\circ\phi^{-1}=\mathrm{id}$. Indeed, $\phi(\sum_i(w_i^*\circ g)\otimes w_i)(v)=\sum_iw_i^*(g(v))w_i=g(v)$. We need to check that $\phi^{-1}\circ\phi=\mathrm{id}_{V^*\otimes W}$ to conclude we have an isomorphism (dimension-counting doesn't work because $V^*\otimes W$ could be infinite-dimensional). Since an arbitrary element in the image of $\phi$ is of the form $\phi(\sum_j f_j\otimes w_j)=\sum_j f_j\cdot w_j$ where $\{f_j\}\subset V^*$ is some set of linear functionals, we indeed have $\phi^{-1}(\sum_j f_j\cdot w_j)=\sum_i\bigl(w_i^*\circ\sum_j f_j\cdot w_j\bigr)\otimes w_i=\sum_if_i\otimes w_i$.<|endoftext|>
TITLE: How to find a Cartan subalgebra of $so(3)$.
QUESTION [9 upvotes]: Let $so(3)$ be the Lie algebra given by
$$
so(3) = \{X \in \text{Mat}_{3 \times 3}: X^T = - X \}.
$$
Here $\text{Mat}_{3 \times 3}$ is the set of all $3 \times 3$ matrices and $X^T$ is the transpose of $X$. How to find a Cartan subalgebra of $so(3)$? Thank you very much.
REPLY [15 votes]: Using the standard basis of $\mathfrak{so}(3)$, given by
$$
e_1=\begin{pmatrix} 0 & 1 & 0 \cr -1 & 0 & 0 \cr 0 & 0 & 0 \end{pmatrix},\;
e_2=\begin{pmatrix} 0 & 0 & 1 \cr 0 & 0 & 0 \cr -1 & 0 & 0 \end{pmatrix},\;
e_3=\begin{pmatrix} 0 & 0 & 0 \cr 0 & 0 & 1 \cr 0 & -1 & 0 \end{pmatrix},
$$
the Lie brackets are given by commutator, i.e.,
$$
[e_1,e_2]=-e_3,\;[e_1,e_3]=e_2,\;[e_2,e_3]=-e_1.
$$
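(As a quick numerical sanity check of these bracket relations, here is a small sketch; the arrays are just the matrices above.)
    import numpy as np

    e1 = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])
    e2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
    e3 = np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]])
    br = lambda a, b: a @ b - b @ a   # matrix commutator

    print(np.array_equal(br(e1, e2), -e3))  # True
    print(np.array_equal(br(e1, e3), e2))   # True
    print(np.array_equal(br(e2, e3), -e1))  # True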
A Cartan subalgebra of a simple (complex) Lie algebra is given by a maximal abelian Lie subalgebra $H$ consisting of diagonalizable elements. Now it is easy to see that every subalgebra of dimension $k>1$ of $\mathfrak{so}(3)$ is not abelian. Hence a Cartan subalgebra $H$ must have dimension $1$. The difficulty now is to find a diagonalizable element generating $H$. This depends on the field. If the field is algebraically closed of characteristic zero, say $\mathbb{C}$, then each basis element is diagonalizable. Over the real numbers one usually tries to avoid this difficulty and passes to another real Lie algebra, which is isomorphic to $\mathfrak{so}_3(\mathbb{R})$, namely the Lie algebra $\mathfrak{su}(2)$, with the Pauli matrices as basis. Then $H=\langle \sigma_3\rangle$, see here. Using the isomorphism $\phi\colon \mathfrak{su}(2)\rightarrow \mathfrak{so}_3(\mathbb{R})$ we obtain a Cartan subalgebra for $\mathfrak{so}_3(\mathbb{R})$.<|endoftext|>
TITLE: Imaginary Golden Ratio
QUESTION [10 upvotes]: While playing with the results of defining a new operation, I came across a number of interesting properties with little literature surrounding it; the link to my original post is here: Finding properties of operation defined by $x⊕y=\frac{1}{\frac{1}{x}+\frac{1}{y}}$? ("Reciprocal addition" common for parallel resistors)
and as you can see, the operation of interest is $x⊕y = \frac{1}{\frac{1}{x}+\frac{1}{y}} = \frac{xy}{x+y}$.
In wanting to find a condition such that $x⊕y = x-y$, I found that the ratio between $x$ and $y$ must be $φ=1.618...$, the golden ratio, for this to work!
$x⊕y=x-y$
$\frac{1}{\frac{1}{x}+\frac{1}{y}} = x-y$
$\frac{xy}{x+y} = x-y$
$xy = x^2-y^2$
$0 = x^2-xy-y^2$
and, using the quadratic formula,
$x = \frac{y±\sqrt{y^2+4y^2}}{2}$
$x = y\frac{1±\sqrt{5}}{2}$
$x = φy$
This result is amazing in and of itself. Yet through the same basic setup, we find a new ratio pops out if we try $x⊕y = x+y$ and it is complex.
$x⊕y = x+y$
$\frac{1}{\frac{1}{x}+\frac{1}{y}} = x+y$
$\frac{xy}{x+y} = x+y$
$xy = x^2+2xy+y^2$
$0 = x^2+xy+y^2$
$x = \frac{-y±\sqrt{y^2-4y^2}}{2}$
$x = y\frac{1±\sqrt{-3}}{2}$
$x = y\frac{1±\sqrt{3}i}{2}$
and this is the "imaginary golden ratio"!
$φ_i = \frac{1+\sqrt{3}i}{2}$
It has many properties of the golden ratio, mirrored. This forum from 2011 is the only literature I could dig up on it, and it explains most of the properties I also found and more. http://mymathforum.com/number-theory/17605-imaginary-golden-ratio.html
This number is extremely cool, because its mathematical properties mirror φ but also have their own coolness.
$φ_i = 1-\frac{1}{φ_i}$
$φ_i^2 = φ_i - 1$
and generally
$φ_i^n = φ_i^{n-1} - φ_i^{n-2}$
This complex ratio also lies on the unit circle in the complex plane, and has a representation as a power of e!
$φ_i = cos(π/3)+ isin(π/3) = e^{iπ/3}$
$|φ_i|=1$
It is also a nested radical, because of the identity $φ_i^2 + 1 = φ_i$
$φ_i=\sqrt{-1+\sqrt{-1+\sqrt{-1+\sqrt{-1+...}}}}$
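As a quick numerical sanity check of these identities (a small sketch):
    import cmath

    phi_i = (1 + 1j * 3**0.5) / 2
    print(abs(phi_i))                                          # ~1.0: on the unit circle
    print(cmath.isclose(phi_i, cmath.exp(1j * cmath.pi / 3)))  # True: phi_i = e^{i pi/3}
    print(cmath.isclose(phi_i**2, phi_i - 1))                  # True: phi_i^2 = phi_i - 1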
Since that forum thread is the only other place I could find that acknowledges the existence of the imaginary golden ratio (other than as a special-case imaginary power of $e$), I'd like to share my findings and ask if anybody has heard of this ratio before, and if anybody could offer more fine-tuned ideas or explorations into the properties of this number. One specific question I have involves its supposed connection (according to the 2011 forum) to the sequence
$f_n = f_{n-1} - f_{n-2}$
$f_0=0$
$f_1=1$
$0,1,1,0,-1,-1,0,1,1,...$
Could somebody explain to me how this sequence is connected to $φ_i$? The forum states there is a connection, but I can't figure out what it is based on the wording. What am I missing?
Thanks for your help with my question/exploration.
REPLY [2 votes]: I found this number in an unrelated context.
Consider the differential equation:
$$f'(x) = f(f(x))$$
The function that is the solution to this has the property that its derivative is the same as composing the function with itself. Assume the function takes the form of $f(x) = A x^r$ where $A$ and $r$ are constants. Then:
$$r A x^{r - 1} = A^{r + 1} x^{r^2}$$
Assuming $x$ is nonzero:
$$r A x^{ (r - 1) - r^2} = A^{r + 1}$$
The RHS is a constant and equal to the LHS, so the exponent must be $0$ (otherwise the LHS would vary with $x$), so:
$$(r - 1) - r^2 = 0$$
The solutions to this are $(1 \pm i \sqrt{3})/2$.
So a function $f(x)$ that has the property of its derivative equaling $f(f(x))$ is of the form $f(x) = A x^r$ where $A$ is some constant and $r$ is the 'imaginary golden ratio'!<|endoftext|>
TITLE: Evaluate $ \lim_{n\rightarrow \infty}\sum^{n}_{k=0}\frac{\binom{n}{k}}{n^k(k+3)} $
QUESTION [14 upvotes]: Evaluate $\displaystyle \lim_{n\rightarrow \infty}\sum^{n}_{k=0}\frac{\binom{n}{k}}{n^k(k+3)}$.
$\bf{My\; Try::}$ We can solve it by converting it into a definite integral,
but I want to solve it without using integration.
So $\displaystyle \lim_{n\rightarrow \infty}\sum^{n}_{k=0}\frac{\binom{n}{k}}{n^k(k+3)} = \lim_{n\rightarrow \infty}\sum^{n}_{k=0}\frac{n(n-1)(n-2)\cdots(n-k+1)}{k!\cdot n^{k}\cdot (k+3)}$
Now how can I proceed from here? Help is required, thanks.
REPLY [3 votes]: This is not an answer but it is too long for a comment.
What I found interesting is that closed form expressions can be obtained for the partial sums since
$$S^{(j)}_n=\sum^{n}_{k=0}\frac{\binom{n}{k}}{n^k(k+j)}=\frac{\, _2F_1\left(j,-n;j+1;-\frac{1}{n}\right)}{j}$$ and from there, the corresponding limits and asymptotics.
For the case where $j=3$ as in the post $$S^{(3)}_n=\frac{\left(1+\frac{1}{n}\right)^n (n+1) \left(n^2+n+2\right)-2 n^3}{(n+1) (n+2)
(n+3)}=(e-2)+\frac{12-\frac{9}{2}e}{n}+O\left(\frac{1}{n^2}\right)$$ Similarly $$S^{(2)}_n=\frac{n^2+(n+1) \left(1+\frac{1}{n}\right)^n}{(n+1) (n+2)}=1+\frac{e-3}{n}+O\left(\frac{1}{n^2}\right)$$ $$S^{(4)}_n=\frac{6 n^4+\left(n \left(n \left(-2 n^2+n+8\right)+11\right)+6\right)
\left(1+\frac{1}{n}\right)^n}{(n+1) (n+2) (n+3) (n+4)}=(6-2 e)+\frac{22 e-60}{n}+O\left(\frac{1}{n^2}\right)$$ where appear interesting patterns.
Concerning the limit, we can find that
$$\displaystyle \lim_{n\rightarrow \infty} S^{(j)}_n=(-1)^j ((j-1)!-!(j-1)\,e)$$
May be, the asymptotics could be of interest
$$S^{(j)}_n=(-1)^j((j-1)!-!(j-1)\,e) +(-1)^{j+1}\frac{(j+1)!-!(j+1)\,e }{2n}+O\left(\frac{1}{n^2}\right)$$
For $j=3$ and $n=50$, the exact value is $\approx 0.71370532$ while the asymptotics leads to $\approx 0.71363646$.
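For what it's worth, a direct numerical evaluation of the partial sum (a small sketch) agrees with these values:
    from math import comb, e

    def S(j, n):
        return sum(comb(n, k) / (n**k * (k + j)) for k in range(n + 1))

    print(S(3, 50))   # ~0.7137053, matching the value quoted above
    print(e - 2)      # ~0.7182818, the claimed limit for j = 3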
Update
Following this question of mine, the asymptotics write $$S^{(j)}_n=(-1)^j\left(\left(\alpha_0-\beta_0e\right)-\frac{\left(\alpha_1-\beta_1e\right)}{2n}+\frac{\left(\alpha_2-\beta_2e\right)}{24n^2}\right)+O\left(\frac{1}{n^3}\right)$$ with $$\alpha_0=(j-1)!\qquad \qquad \beta_0=!(j-1)$$ $$\alpha_1=(j+1)!\qquad \qquad \beta_1=!(j+1)$$ $$\alpha_2= 3\times(j+3)! - 8\times(j+2)! \qquad \qquad \beta_2=3\,\times\,!(j+3) - 8\,\times\,!(j+2)$$ Many thanks to achille hui who identified the sequence for $\beta_2$ and provided a nicer expression for $\alpha_2$.
For $j=3$ and $n=50$, the exact value is $\approx 0.71370532$ while the new asymptotics leads to $\approx 0.71370644$.<|endoftext|>
TITLE: How many natural numbers $x\leqslant 21 !$ are there such that $\gcd(x,20!)=1$
QUESTION [5 upvotes]: How many natural numbers $x\leqslant 21 !$ are there such that $\gcd(x,20!)=1$?
Attempt:
I used this method and found that:
$21!$$=2^{18}\times3^{9}\times5^{4}\times7^{3}\times11\times13\times17\times19$
$20!$$=2^{18}\times3^{\color{red}8}\times5^{4}\times7^{\color{red}2}\times11\times13\times17\times19$
It's easy to see that the prime factorization contains the same prime numbers, but how can I know how many numbers $x\leqslant 21 !$ there are such that $\gcd(x,20!)=1$
If I look at $3$ and $3^2$: there are $4$ numbers between $3$ and $9$ such that $\gcd(3,x)=1$: $3,\color{blue}4,\color{blue}5,6,\color{blue}7,\color{blue}8,9$.
REPLY [4 votes]: You can easily notice that for a natural number $n$, $\gcd(n,20!)=1$ if and only if $\gcd(n,21!)=1$, since $20!$ and $21!$ have the same prime factors. Thus you can ask the question as follows:
How many natural numbers $x\leq 21!$ are there such that $\gcd(x,21!)=1$? The answer is $\varphi(21!)$, which is easy to calculate.
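A short computational sketch (the listed primes are exactly the primes up to $21$, i.e. those dividing $20!$ and $21!$):
    from math import factorial, prod

    primes = [2, 3, 5, 7, 11, 13, 17, 19]   # primes dividing 20! (and 21!)
    N = factorial(21)
    # Euler product: phi(N) = N * prod(1 - 1/p) over the distinct primes p | N
    phi = N * prod(p - 1 for p in primes) // prod(primes)
    print(phi)   # the number of x <= 21! with gcd(x, 20!) = 1
<|endoftext|>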
TITLE: How far is the list of known primes known to be complete?
QUESTION [56 upvotes]: So there is always the search for the next "biggest known prime number". The last result that came out of GIMPS was $2^{74\,207\,281} - 1$, with over twenty million digits. Wikipedia also lists the twenty highest known prime numbers, only the four smallest on that list have fewer than three million digits.
For some while now, I have been wondering about the smaller prime numbers we haven't found. How far up is the list of known primes known to be complete? Since $500$ to $1000$ digit primes are considered safe for the RSA algorithm, I'd assume that it's well below that. How far along the number line have we checked that there are no more primes to be found? How fast is this boundary moving forward, currently? Have we, for instance, checked the primality of all numbers below $10^{100}$, or are we stuck somewhere south of $10^{20}$?
REPLY [7 votes]: Listing primes in order is a fairly trivial problem — in fact, I believe we have programs that can compute lists of primes faster than they can be written to disk, let alone displayed in any human readable format.
The thing is, there are a lot of primes. The entire internet put together probably couldn't store the list of all 20 digit primes.
But that's okay, because we don't need lists of primes — whenever we need primes, we can just generate them.<|endoftext|>
TITLE: How long until I get out of bed?
QUESTION [7 upvotes]: Suppose I have two independent alarm clocks which I set right before I go to bed. Their ring times are exponentially distributed with rates $\lambda_1$ and $\lambda_2$. Whenever alarm 1 goes off I immediately reset both alarms, but if alarm 2 goes off I actually get up.
How long do I stay in bed? Am I right in saying that alarm clock one is redundant with respect to bed staying time?
REPLY [3 votes]: Yes, you are right. This comes down to the fact that exponential random variables are memoryless. The idea is that a "reset" exponential has the same distribution as a conditioned one. Here's the math:
Let $X$ be exponential with rate $\lambda$, and let $b > a$. Then $$\mathbb{P}(X > b | X > a) = \frac{\mathbb{P}(X > b)}{\mathbb{P}(X > a)} = \frac{e^{-\lambda b}}{ e^{-\lambda a}} = e^{-\lambda(b-a)} = \mathbb{P}(X > b - a).$$
In other words, if we know that alarm 2 hasn't gone off after $a$ minutes, then the probability it won't go off for another $b - a$ minutes is the same as the probability that a freshly reset clock won't go off for $b - a$ minutes. This proves that resetting doesn't change the distribution of alarm 2's ring time.
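A quick Monte Carlo sanity check (a sketch; the rates below are arbitrary) is consistent with this: the average time in bed comes out close to $1/\lambda_2$.
    import random

    lam1, lam2, trials = 3.0, 0.7, 200_000

    def time_in_bed():
        t = 0.0
        while True:
            t1 = random.expovariate(lam1)   # next ring of alarm 1
            t2 = random.expovariate(lam2)   # next ring of alarm 2
            if t2 < t1:                     # alarm 2 rings first: get up
                return t + t2
            t += t1                         # alarm 1 rings first: reset both

    avg = sum(time_in_bed() for _ in range(trials)) / trials
    print(avg, 1 / lam2)   # both close to 1/lambda_2 ~= 1.43
<|endoftext|>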
TITLE: Idiotic determinant mistake?
QUESTION [9 upvotes]: I need to calculate $$\begin{vmatrix} \lambda & -1 & 0 & 0\\ -1 & \lambda & 0 & 0 \\ 0 & 0 & \lambda & -1 \\ 0 & 0 & -1 & \lambda \end{vmatrix}.$$
For the life of me I don't see what my mistake is: expanding in the first row we have
$$\lambda^2(\lambda ^2-1)-(-1)(-1)(\lambda ^2-1)= (\lambda^2-1)^2 .$$ What is my error?
REPLY [15 votes]: As everyone comments, my answer is correct: the determinant is indeed $(\lambda^2-1)^2$.
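A quick symbolic check (a sketch using sympy) confirms it:
    import sympy as sp

    lam = sp.symbols('lambda')
    M = sp.Matrix([[lam, -1, 0, 0],
                   [-1, lam, 0, 0],
                   [0, 0, lam, -1],
                   [0, 0, -1, lam]])
    print(sp.factor(M.det()))   # (lambda - 1)**2*(lambda + 1)**2, i.e. (lambda**2 - 1)**2
<|endoftext|>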
TITLE: Confusion Regarding Munkres's Definition of Basis for a Topology
QUESTION [14 upvotes]: The definition of Basis for a Topology as given in Munkres's book is as follows,
If $X$ is a set, a basis for a topology on $X$ is a collection $\mathcal{B}$ of subsets of $X$ (called basis elements) such that
For each $x∈X$, there is at least one basis element $B$ containing $x$
If $x$ belongs to the intersection of two basis elements $B_1$ and $B_2$, then there is a basis
element $B_3$ containing $x$ such that $B_3⊆B_1∩B_2$.
My question is,
In the definition, though we are defining a "basis for a topology", there is no mention of the topology on $X$. What role does the topology on $X$ play in the definition? Is it just a simple printing mistake, or am I missing something?
REPLY [14 votes]: This is a completely revised version of my previous answer, which was not correct. The existence of a major error in my previous answer was recently pointed out to me by Merk Zockerborg.
The main result relevant to the question asked by user 170039 is Corollary 2 below.
Contents
Basis for a topological space $(X,\tau).$
Given ${\mathcal B} \subseteq {\cal P}(X),$ find a topology ${\tau}_{\mathcal B}$ on $X$ such that $\mathcal B$ is a basis for $(X,\tau_{\mathcal B}).$
Comparing $\tau$ and $\tau_{\mathcal B}$ when $(X,\tau)$ is a topological space and ${\mathcal B} \subseteq {\cal P}(X)$ is such that $(X,\tau_{\mathcal B})$ is a topological space.
Notation: $X$ is a set and ${\cal P}(X)$ is the collection of all subsets of $X.$ The (sometimes subscripted) symbols $\tau$ and $\mathcal B$ will denote subsets of ${\cal P}(X).$ Most of the time (but not always) $\tau$ will be a topology on $X$ and $\mathcal B$ will be a basis for a topology on $X.$
In the spirit of making this an expository overview of some introductory ideas that arise when introducing the notion of a basis for a topological space, I am not including proofs of the stated theorems. However, those actively studying this topic in topology may find it useful to provide proofs.
1. Basis for a topological space $(X,\tau).$
Definition: Let $(X,\tau)$ be a topological space. A basis for $(X,\tau)$ is a collection $\mathcal B$ of subsets of $X$ such that:
(a) $\;\;\mathcal B \subseteq \tau$
(b) $\;$ For each $x \in X,$ and for each $U \in \tau$ such that $x \in U,$ there exists $V \in \mathcal B$ such that $x \in V$ and $V \subseteq U.$
Note: The term "base" is more commonly used than "basis", but I'll use "basis" in agreement with the usage by user 170039.
Roughly speaking, if we consider (open) neighborhoods of $x$ as "measures of being close to $x$", where being closer to $x$ is described by the use of a smaller (in the subset sense) neighborhood of $x,$ then a basis for $\tau$ is a subcollection of open sets that is sufficient to describe being arbitrarily close to any given point.
Note that this notion of being close to $x$ has more structure to it than a notion entirely based on the subset relation, because we also have the property that any finite number of "close to $x$ conditions" can be replaced by a single "close to $x$ condition", due to the finite intersection property of open sets. More specifically, being simultaneously $U_1$-close to $x$ and $U_2$-close to $x$ can be replaced by being $(U_1 \cap U_2)$-close to $x$ in the case of $\tau,$ and by being $V$-close to $x$ for some $\mathcal B$-element $V$ such that $V \subseteq U \overset{\text{def}}{=} U_1 \cap U_2$ in the case of $\mathcal B.$ For example, suppose $X = \{x,y,z\}$ and $\tau = \{\;\{x,y\}, \; \{y,z\}\;\}.$ Note that $\tau$ is not a topology on $X.$ Nonetheless, we can still talk about being close to elements of $X.$ For example, there is the notion of being "$\{x,y\}$-close to $y$" and there is the notion of being $\{y,z\}$-close to $y.$ However, note that there is no single closeness condition that implies each of these conditions, because the only subset of $X$ containing $y$ that is a subset of $\{x,y\}$ and a subset of $\{y,z\}$ is $\{y\},$ and $\{y\} \not \in \tau.$
A topology can have more than one basis. For example, in the case of the real numbers with its usual topology, the collection of all open intervals of finite length is a basis and the collection of all open intervals of finite length with rational endpoints is a basis.
Question: [This occurred to me when I was writing these notes, and I thought others here might be interested.] How many bases exist for the usual topology of the real numbers? The answer is $2^{c}.$ Since each such basis is a collection of open sets, and there are $2^c$ many collections of open sets of $\mathbb R$ (because there are $c$ many open sets of ${\mathbb R}),$ it follows that there are at most $2^c$ many bases for the usual topology of the real numbers. The following shows that there are at least $2^c$ many bases for the usual topology of the real numbers. Let $D_1$ and $D_2$ be disjoint subsets of ${\mathbb R}$ such that $D_1$ has cardinality $c$ and $D_2$ is dense-in-${\mathbb R}$ (for example, $D_1$ could be the set of irrational numbers between $0$ and $1,$ and $D_2$ could be the set of rational numbers), and consider the collection of open intervals each of whose endpoint(s) belongs to $D_1 \cup D_{2}.$ This collection of open intervals is a basis for ${\mathbb R}.$ Moreover, if we remove from this collection any set consisting only of open intervals each of whose endpoint(s) belongs to $D_{1},$ then what remains will also be a basis, and there are $2^c$ many ways to remove such sets of intervals from the original collection of open intervals (because there are $2^c$ many subsets of the collection of open intervals each of whose endpoint(s) belongs to $D_{1}).$
Theorem 1: Let $(X,\tau)$ be a topological space and let $\mathcal B$ be a collection of subsets of $X$ such that:
(a) $\;\;\mathcal B \subseteq \tau$
(b) $\;$ Every element of $\tau$ is the union of some subcollection of elements in $\mathcal B.$
Then $\mathcal B$ is a basis for $(X,\tau).$
Regarding (b) above, we are using the convention that an empty union of sets is possible, and that an empty union of sets is equal to the empty set.
Theorem 2: Let $(X,\tau)$ be a topological space and let $\mathcal B$ be a basis for $(X,\tau).$ Then every element of $\tau$ is the union of some subcollection of elements in $\mathcal B.$
Theorems 1 and 2 together imply that we could have defined "$\mathcal B$ is a basis for $(X,\tau)$" by replacing (b) in our earlier definition with (b) in Theorem 1. This alternate way of defining a basis for $(X,\tau)$ is used in some books.
Note that (b) in Theorem 1 provides a way to generate all elements of $\tau$ from the elements in $\mathcal B$ — take unions. This is analogous to having a way to generate all elements in a vector space from a (vector space) basis — take linear combinations.
2. Given ${\mathcal B} \subseteq {\cal P}(X),$ find a topology ${\tau}_{\mathcal B}$ on $X$ such that $\mathcal B$ is a basis for $(X,\tau_{\mathcal B}).$
Suppose that no topology on the set $X$ has been provided. Can we obtain a topology ${\tau}_{\mathcal B}$ on $X$ by picking some collection $\mathcal B \subseteq {\cal P}(X)$ and letting ${\tau}_{\mathcal B}$ be the collection of all possible unions of elements in $\mathcal B$?
Example 1: Let $X = \{x,y\}$ and let $\mathcal B = \{\; \{x\}\;\}.$ Then the collection of all possible unions of elements in $\mathcal B$ gives ${\tau}_{\mathcal B} = \{\; \emptyset,\;\{x\}\;\},$ but ${\tau}_{\mathcal B}$ is not a topology on $X,$ because $X \notin {\tau}_{\mathcal B}.$ (A simpler example is to use $\mathcal B = \emptyset,$ but I wanted to give a less trivial example.)
Perhaps we can fix the problem that arises in Example 1 by assuming $\cup {\mathcal B} = X.$ Note that $\cup {\mathcal B} = X$ is equivalent to the first bullet assumption in user 170039’s statement of the definition of "basis for a topology" from Munkres's book. Example 2 shows that having $\cup {\mathcal B} = X$ is not sufficient for ${\tau}_{\mathcal B}$ to be a topology on $X.$
Example 2: Let $X = \{x,y,z\}$ and let ${\mathcal B} = \{\;\{x,y\},\;\{y,z\}\;\}.$ Then $\cup {\mathcal B} = X.$ However, the collection of all possible unions of elements in $\mathcal B$ gives ${\tau}_{\mathcal B} = \{\;\emptyset,\; \{x,y\},\;\{y,z\},\; X\;\},$ and ${\tau}_{\mathcal B}$ is not a topology because ${\tau}_{\mathcal B}$ is not closed under finite intersections — note that $\{x,y\} \cap \{y,z\} = \{y\}$ and $\{y\} \notin {\tau}_{\mathcal B}.$
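(For readers who like to experiment, here is a tiny sketch that mechanically forms all unions of members of $\mathcal B$ from Example 2 and checks closure under pairwise intersection; the helper names are ad hoc.)
    from itertools import chain, combinations

    B = [frozenset('xy'), frozenset('yz')]
    # tau_B: all possible unions of subcollections of B (the empty union gives the empty set)
    subcollections = chain.from_iterable(combinations(B, r) for r in range(len(B) + 1))
    tau_B = {frozenset().union(*s) for s in subcollections}

    print(sorted(map(sorted, tau_B)))                          # [[], ['x','y'], ['x','y','z'], ['y','z']]
    print(all(a & b in tau_B for a in tau_B for b in tau_B))   # False: {'y'} is missing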
It turns out that fixing both problems, namely the problem that Example 1 illustrates and the problem that Example 2 illustrates, is sufficient for ${\tau}_{\mathcal B}$ to be a topology on $X.$
Theorem 3: Let $X$ be a set and let $\mathcal B$ be a collection of subsets of $X$ such that:
(a) $\;\; \cup {\mathcal B} = X$
(b) $\;$ For each positive integer $n,$ if $B_1 \in \mathcal B$ and $B_2 \in \mathcal B$ and $\cdots$ and $B_n \in {\mathcal B},$ then $B_1 \cap B_2 \cap \cdots \cap B_n$ can be written as the union of (possibly infinitely many) elements in ${\mathcal B}.$
If we let ${\tau}_{\mathcal B}$ be the collection of all possible unions of elements in $\mathcal B,$ then $(X,{\tau}_{\mathcal B})$ is a topological space and $\mathcal B$ is a basis for $(X,{\tau}_{\mathcal B}).$
In some books Theorem 3 is buried in results about a subbasis for a topological space. Explicit statements of it can be found in Theorem 3.50 on p. 26 of Introduction to General Topology by Helen Frances Cullen (1968) and in 2.2 Base Characterization on p. 153 of An Introduction to Topology and Homotopy by Allan John Sieradski (1992). On the off-chance that anyone looks at Cullen's book, note that in her book she assumes an empty intersection gives the underlying universal set (see top of p. 416). Since this is not a standard assumption, I've avoided this assumption by including (a).
Theorem 4 is the more commonly stated condition for a collection of subsets of $X$ to generate a topology on $X,$ where the method of generation is by using the collection of all possible unions of elements in that collection of subsets. At the risk of stating the obvious, if $\mathcal B$ satisfies the assumptions in Theorem 3 and $\mathcal B$ also satisfies the assumptions in Theorem 4 (in fact, one can show that satisfying the assumptions in either theorem implies satisfying the assumptions in the other theorem), then the topology obtained using Theorem 3 is the same as the topology obtained using Theorem 4. This follows from the fact that in each theorem the same procedure is used to obtain the topology, namely the topology is the collection of all possible unions of elements in ${\mathcal B}.$
Theorem 4: Let $X$ be a set and let $\mathcal B$ be a collection of subsets of $X$ such that:
(a) $\;\; \cup {\mathcal B} = X$
(b) $\;$ If $U \in \mathcal B$ and $V \in \mathcal B$ and $x \in U \cap V,$ then there exists $W \in \mathcal B$ such that $x \in W$ and $W \subseteq U \cap V.$
If we let ${\tau}_{\mathcal B}$ be the collection of all possible unions of elements in $\mathcal B,$ then $(X,{\tau}_{\mathcal B})$ is a topological space and $\mathcal B$ is a basis for $(X,{\tau}_{\mathcal B}).$
One of the things that (b) in Theorem 4 does is to incorporate our earlier observation (long paragraph a little below the definition of "basis") that finitely many "close to $x$" conditions can be replaced by a single "close to $x$" condition. Note that in Example 2, ${\tau}_{\mathcal B}$ has the property that "$\{x,y\}$-close to $y$ and $\{y,z\}$-close to $y$" cannot be equaled or strengthened by any single ${\tau}_{\mathcal B}$-closeness notion.
In our earlier vector space analogy, Theorems 3 and 4 are somewhat akin to finding conditions under which a set of vectors is a linearly independent set, then defining the linear span of those vectors, and finally observing that the linear span of those vectors forms a vector space.
3. Comparing $\tau$ and $\tau_{\mathcal B}$ when $(X,\tau)$ is a topological space and ${\mathcal B} \subseteq {\cal P}(X)$ is such that $(X,\tau_{\mathcal B})$ is a topological space.
Let $(X,\tau)$ be a topological space and let $\mathcal B$ be a collection of subsets of $X$ satisfying the assumptions in either Theorem 3 or Theorem 4 (or both). In this situation we have two topological spaces to contend with, $(X,\tau)$ and $(X,\,\tau_{\mathcal B}).$ In general, we have $\tau \neq \tau_{\mathcal B}.$ Indeed, there is no reason why we would expect there to be any relation between $\tau$ and $\tau_{\mathcal B},$ aside from the fact that each is a collection of subsets of $X$ that satisfies the axioms for a topology on $X.$
Example 3: Suppose ${\mathcal B} \not \subseteq {\tau}.$ Then it is easy to see that $\tau \neq \tau_{\mathcal B}.$ (Proof: Let $B \in \mathcal B$ such that $B \not \in {\tau}.$ Then $B$ belongs to $\tau_{\mathcal B},$ because we always have ${\mathcal B} \subseteq \tau_{\mathcal B},$ and hence $B$ does not belong to ${\tau}.$ Therefore, $\tau \neq \tau_{\mathcal B}.)$
Example 4, due to Merk Zockerborg, shows that ${\mathcal B} \subseteq {\tau}$ is not sufficient to conclude that $\tau = \tau_{\mathcal B}.$
Example 4: Let $X = \{x,y\}$ and $\tau = \{\;\emptyset,\;\{x\},\;\{y\},\;\{x,y\}\;\}$ and ${\mathcal B} = \{\;\{x\},\;\{x,y\}\;\}.$ Then ${\mathcal B} \subseteq {\tau}$ and $\tau_{\mathcal B} = \{\;\emptyset,\;\{x\},\;\{x,y\}\;\}.$ Therefore, $\tau \neq \tau_{\mathcal B}.$
Corollary 2 below gives one possible assumption that, along with the assumption ${\mathcal B} \subseteq {\tau},$ is sufficient to conclude that $\tau = \tau_{\mathcal B}.$
Theorem 5: Let $X$ be a set and let ${\mathcal B}_1,$ ${\mathcal B}_2$ each be collections of subsets of $X$ that satisfy the assumptions in either Theorem 3 or Theorem 4. Then ${\tau}_{{\mathcal B}_1} \subseteq {\tau}_{{\mathcal B}_2}$ if and only if the following assertion holds:
For each $B_1 \in {\mathcal B}_1$ and for each $x \in B_{1},$ there exists $B_2 \in {\mathcal B}_2$ such that $x \in B_2$ and $B_2 \subseteq B_{1}.$
$\;$
Corollary 1: Let $X$ be a set and let ${\mathcal B}_1,$ ${\mathcal B}_2$ each be collections of subsets of $X$ that satisfy the assumptions in either Theorem 3 or Theorem 4. Then ${\tau}_{{\mathcal B}_1} = {\tau}_{{\mathcal B}_2}$ if and only if both of the following assertions hold:
For each $B_1 \in {\mathcal B}_1$ and for each $x \in B_{1},$ there exists $B_2 \in {\mathcal B}_2$ such that $x \in B_2$ and $B_2 \subseteq B_{1}.$
For each $B_2 \in {\mathcal B}_2$ and for each $x \in B_{2},$ there exists $B_1 \in {\mathcal B}_1$ such that $x \in B_1$ and $B_1 \subseteq B_{2}.$
$\;$
Corollary 2: Let $(X,\tau)$ be a topological space and let $\mathcal B$ be a collection of subsets of $X$ that satisfies the assumptions in either Theorem 3 or Theorem 4 such that:
(a) $\;\; {\mathcal B} \subseteq {\tau}$
(b) $\;$ For each $U \in \tau$ and for each $x \in U,$ there exists $B \in \mathcal B$ such that $x \in B$ and $B \subseteq U.$
Then $\tau = \tau_{\mathcal B}.$
Proof of Corollary 2: From ${\mathcal B} \subseteq {\tau}$ and the fact that $\tau$ is closed under arbitrary unions, it follows that $\tau_{\mathcal B} \subseteq {\tau}.$ Applying Theorem 5 with ${\mathcal B}_1 = \tau$ and ${\mathcal B}_2 = {\mathcal B},$ and observing that ${\tau}_{{\mathcal B}_1} = {\tau}_{\tau} = \tau$ (because $\tau$ is closed under arbitrary unions), we get $\tau \subseteq \tau_{\mathcal B}.$ (The notation is slightly confusing, since in ${\tau}_{\tau}$ the subscripted $\tau$ is a topology on $X$ and the other $\tau$ is part of the notation that symbolizes the result of carrying out a certain operation on subscripted set.)<|endoftext|>
TITLE: Why is this proof of the chain rule incorrect?
QUESTION [7 upvotes]: I saw this proof of the chain rule, but it says this is a flawed proof. Why? I guessed that the reason it is wrong is that you can't substitute $g(x+h)$ and $g(x)$ into the limit.
REPLY [3 votes]: I must say it is a very weird proof (or perhaps a weird attempt at proof of chain rule) and it does not really capture the essence of chain rule which says that:
Chain Rule: If $g$ is differentiable at $a$ and $f$ is differentiable at $g(a)$ then $f\circ g$ is differentiable at $a$ and $(f\circ g)'(a) = f'(g(a))g'(a)$.
The proof presented in your post assumes that $(f\circ g)'(a)$ exists (whereas this is the conclusion and not a hypothesis of the chain rule). Another problem is that it assumes $g'(a) \neq 0$, which is a rather artificial restriction. So the proof given in your post is not about the chain rule; rather, it is a proof of the following result (which can at best be called a simpler version of the chain rule):
Theorem: If $g$ is differentiable at $a$ with $g'(a) \neq 0$ and $f$ is differentiable at $g(a)$ and $f\circ g$ is differentiable at $a$ then $(f\circ g)'(a) = f'(g(a))g'(a)$.
BTW I hope your book has given a proper proof of the chain rule and is then comparing it with one of the many flawed proofs available in calculus textbooks. If not then you need to consider two cases in proof of chain rule: 1) when $g'(a) \neq 0$ and 2) when $g'(a) = 0$.
The first case is easy. The fact that $$g'(a) = \lim_{h \to 0}\frac{g(a + h) - g(a)}{h}\neq 0$$ means that there is a value $\delta > 0$ such that $$\frac{g(a + h) - g(a)}{h}\neq 0$$ for all $h$ with $0 < |h| < \delta$. This ensures that $g(a + h) - g(a) \neq 0$ for $0 < |h| < \delta$.
Now we have
\begin{align}
(f\circ g)'(a) &= \lim_{h \to 0}\frac{f(g(a + h)) - f(g(a))}{h}\notag\\
&= \lim_{h \to 0}\frac{f(g(a + h)) - f(g(a))}{g(a + h) - g(a)}\cdot\frac{g(a + h) - g(a)}{h}\notag\\
&= \lim_{k \to 0}\frac{f(g(a) + k) - f(g(a))}{k}\cdot\lim_{h \to 0}\frac{g(a + h) - g(a)}{h}\text{ (putting }k = g(a + h) - g(a))\notag\\
&= f'(g(a))g'(a)\notag
\end{align}
If $g'(a) = 0$ then we need to establish that $(f\circ g)'(a) = 0$. Let $\epsilon > 0$ be given. We will find a $\delta > 0$ such that $$\left|\frac{f(g(a + h)) - f(g(a))}{h}\right| < \epsilon$$ for $0 < |h| < \delta$. Since $f'(g(a))$ exists it follows that the ratio $$\frac{f(g(a) + k) - f(g(a))}{k}$$ is bounded for all sufficiently small values of $k$. To put rigorously, there exist real numbers $M > 0, \delta_{1} > 0$ such that $$\left|\frac{f(g(a) + k) - f(g(a))}{k}\right| < M\tag{1}$$ for all $0 < |k| < \delta_{1}$.
Further $g'(a) = 0$ means that there is a $\delta > 0$ such that $$|g(a + h) - g(a)| < \delta_{1}, \left|\frac{g(a + h) - g(a)}{h}\right| < \frac{\epsilon}{M}\tag{2}$$ for all $h$ with $0 < |h| < \delta$.
Let $k = g(a + h) - g(a)$. If $k = 0$ then $f(g(a + h)) - f(g(a)) = 0$ so that $$\left|\frac{f(g(a + h)) - f(g(a))}{h}\right| < \epsilon$$ trivially and if $k \neq 0$ then using $(1)$ and $(2)$ we see that
\begin{align}
\left|\frac{f(g(a + h)) - f(g(a))}{h}\right| &= \left|\frac{f(g(a) + k) - f(g(a))}{k}\right|\cdot\left|\frac{g(a + h) - g(a)}{h}\right|\notag\\
&< M \cdot\frac{\epsilon}{M} = \epsilon\notag
\end{align}
for all values of $h$ with $0 < |h| < \delta$. We have thus established the chain rule for the case when $g'(a) = 0$.<|endoftext|>
TITLE: Is $77!$ divisible by $77^7$?
QUESTION [11 upvotes]: Can $77!$ be divided by $77^7$?
Attempt:
Yes, because $77=11\times 7$ and $77^7=11^7\times 7^7$ so all I need is that the prime factorization of $77!$ contains $\color{green}{11^7}\times\color{blue} {7^7}$ and it does.
$$77!=77\times...\times66\times...\times55\times...\times44\times...\times33\times...\times22\times...\times11\times...$$
and all these $\uparrow$ numbers are multiples of $11$, and there are at least $7$ of them, so $77!$ certainly contains $\color{green}{11^7}$
And $77!$ also contains $\color{blue} {7^7}:$
$$...\times7\times...\times14\times...\times21\times...\times28\times...\times35\times...42\times...49\times...=77!$$
I have a feeling that my professor is looking for another solution.
REPLY [4 votes]: Although an answer is already provided, I can't stress enough the usefulness of Legendre's theorem for resolving this class of problems, especially in its digit-sum form:
According to this really easy to grasp and remember result, $\nu_p(n!)=\frac{n-s_p(n)}{p-1}$, where $s_p(n)$ is the sum of the digits of $n$ written in base $p$. Here $77=140_7$ and $77=70_{11}$, so
$$\nu_7(77!)=\frac{77-5}{7-1}=12$$
$$\nu_{11}(77!)=\frac{77-7}{11-1}=7$$
Which means $$7^{12}\cdot 11^7 \mid 77!$$<|endoftext|>
TITLE: Average distance between two random points on a square with sides of length $1$
QUESTION [5 upvotes]: "What's the average distance between two random points on a square with sides of length $1$?"
Here is an attempt which is wrong but I can't see how exactly.
Fix $(x, y) \in [0, 1]^2$
The average distance from $x$ to some $x' \in [0,1]$ is
$\triangle x = 0.5x^2 + 0.5(1-x)^2 $
Likewise $\triangle y = 0.5y^2 + 0.5(1-y)^2 $
One could argue that the average distance between fixed $(x, y)$ and some $(x', y') \in [0,1]^2$ is then $\triangle r = \sqrt{\triangle y^2 + \triangle x^2} $
Then just take the average of $\triangle r(x,y)$. Double integrating $\triangle r$ with respect to $x$ and $y$ over $[0,1]^2$ gives around $0.47$.
Close but not correct. Why doesn't this work?
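For reference, a quick Monte Carlo estimate (a minimal Python sketch) is what convinced me the true average is near $0.5214$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000
p = rng.random((N, 2))                       # first random point in the unit square
q = rng.random((N, 2))                       # second random point
print(np.linalg.norm(p - q, axis=1).mean())  # ~ 0.5214
```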
REPLY [5 votes]: $\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\Li}[1]{\,\mathrm{Li}}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
Note that, in $\ds{2D}$,
\begin{align}
r & = {1 \over 3}\,\nabla\cdot\pars{r\,\vec{r}} =
{1 \over 3}\bracks{\partiald{\pars{rx}}{x} + \partiald{\pars{ry}}{y}} =
{1 \over 3}\bracks{\partiald{\pars{rx}}{x} - \partiald{\pars{-ry}}{y}}
\\[3mm] & =
{1 \over 3}\bracks{\nabla\times\pars{-ry\,\hat{x} + rx\,\hat{y}}}_{z} =
-\,{1 \over 3}\braces{\nabla\times\bracks{\root{x^{2} + y^{2}}
\pars{y\,\hat{x} - x\,\hat{y}}}}_{z}\tag{1}
\end{align}
Let $\ds{\vec{r} \equiv \pars{x,y}}$ and $\ds{\vec{R} \equiv \pars{X,Y}}$.
\begin{align}
\color{#f00}{?} & =
\iint_{\pars{0,1}^{\,\, 4}}\ \verts{\vec{r} -\vec{R}}\,\dd^{2}\vec{r}
\,\dd^{2}\vec{R} =
\int_{\pars{0,1}^{\,\, 2}}\bracks{\int_{\pars{0,1}^{\,\, 2}}\
\verts{\vec{R} -\vec{r}}\,\dd^{2}\vec{R}}\dd^{2}\vec{r}\tag{2}
\end{align}
With the identity $\ds{\pars{1}}$:
\begin{align}
&\!\!\!\!\!\int_{\pars{0,1}^{\,\, 2}}\verts{\vec{R} -\vec{r}}\,\dd^{2}\vec{R} =
-\,{1 \over 3}\oint_{\pars{0,1}^{\,\, 2}}
\root{\pars{X - x}^{2} + \pars{Y - y}^{2}}
\bracks{\pars{Y - y}\,\dd X - \pars{X - x}\,\dd Y}
\\[8mm]= &\
\!\!-\,{1 \over 3}\int_{0}^{1}
\root{\pars{X - x}^{2} + y^{2}}\pars{-y}\,\dd X -
{1 \over 3}\int_{0}^{1}
\root{\pars{1 - x}^{2} + \pars{Y - y}^{2}}\bracks{-\pars{1 - x}}\,\dd Y
\\[3mm] &\
\!\!-\,{1 \over 3}\int_{1}^{0}
\root{\pars{X - x}^{2} + \pars{1 - y}^{2}}\pars{1 - y}\,\dd X -
{1 \over 3}\int_{1}^{0}
\root{x^{2} + \pars{Y - y}^{2}}\bracks{-\pars{-x}}\,\dd Y
\\[8mm] = &\
{2 \over 3}\,y\,\mathrm{f}\pars{x,y^{2}} +
{2 \over 3}\,x\,\mathrm{f}\pars{y,x^{2}}\tag{3}
\\[3mm] &\
\qquad\qquad\mbox{where}\quad
\mathrm{f}\pars{a,b} \equiv
\int_{0}^{1}\root{\pars{\xi - a}^{2} + b}\,\dd\xi\tag{4}
\end{align}
With $\ds{\pars{3}}$ and $\ds{\pars{4}}$, the expression $\ds{\pars{2}}$ is reduced to:
\begin{align}
\color{#f00}{?} & =
\int_{0}^{1}\int_{0}^{1}\bracks{%
{2 \over 3}\,y\,\mathrm{f}\pars{x,y^{2}} +
{2 \over 3}\,x\,\mathrm{f}\pars{y,x^{2}}}\,\dd x\,\dd y
\\[3mm] & =
{2 \over 3}\int_{0}^{1}\int_{0}^{1}\mathrm{f}\pars{x,y}\,\dd y\,\dd x =
{2 \over 3}\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}\root{\pars{\xi - x}^{2} + y}
\,\dd\xi\,\dd x\,\dd y
\\[3mm] & =
{4 \over 9}\int_{0}^{1}\int_{0}^{1}
\left.\vphantom{\huge A^{a}}
\bracks{\pars{\xi - x}^{2} + y}^{3/2}\,\,\right\vert_{\ y\ =\ 0}^{\ y\ =\ 1}
\,\,\,\,\,\dd\xi\,\dd x
\\[3mm] & =
{4 \over 9}\int_{0}^{1}\int_{0}^{1}
\braces{\bracks{\pars{\xi - x}^{2} + 1}^{3/2} - \verts{\xi - x}^{3}}
\,\dd\xi\,\dd x
\\[3mm] & =
{4 \over 9}\int_{0}^{1}\int_{-x}^{1 - x}
\pars{\bracks{\xi^{2} + 1}^{3/2} - \verts{\xi}^{3}}
\,\dd\xi\,\dd x
\\[3mm] & =
{4 \over 9}\int_{0}^{1}
\pars{\bracks{\xi^{2} + 1}^{3/2} - \verts{\xi}^{3}}
\pars{\int_{0}^{1 - \xi} - \int_{\xi}^{1}\,\dd x}\,\dd\xi
\\[3mm] & =
{8 \over 9}\int_{0}^{1}\pars{\xi^{2} + 1}^{3/2}\,\dd\xi\ -\
\overbrace{{8 \over 9}\int_{0}^{1}\xi\pars{\xi^{2} + 1}^{3/2}\,\dd\xi}
^{\ds{{8 \over 45}\pars{4\root{2} - 1}}}\
+\
\overbrace{{8 \over 9}\int_{0}^{1}\pars{\xi^{4} - \xi^3}\,\dd\xi}
^{\ds{-\,{2 \over 45}}}
\end{align}
The remaining integral can be evaluated straightforwardly with the substitution $\ds{\xi = \sinh\pars{\theta}}$. The final result becomes:
$$
\color{#f00}{?} =
\color{#f00}{{1 \over 15}\bracks{2 + \root{2} + 5\ln\pars{1 + \root{2}}}} \approx 0.5214
$$<|endoftext|>
TITLE: Chain rule proof - Apostol
QUESTION [8 upvotes]: Apostol, Calculus I, pages 174-175, gives the proof of the chain rule.
The theorem states: Let $f$ be the composition of two functions $u$ and $v$, say $f=u \circ v$. Suppose that both derivatives $v'(x)$ and $u'(y)$ exist, where $y=v(x)$. Then the derivative $f'(x)$ also exists and is given by the formula $f'(x)=u'(y)\cdot v'(x)$.
Proof: The difference quotient for f is (4.12): $\frac{f(x+h)-f(x)}{h}=\frac{u[v(x+h)]-u[v(x)]}{h}$ . Let $y=v(x)$ and let $k=v(x+h)-v(x)$. Then we have $v(x+h)=y+k$ and (4.12) becomes (4.13): $\frac{f(x+h)-f(x)}{h}=\frac{u(y+k)-u(y)}{h}$ .
If $k\neq0$, then we multiply and divide by $k$ and obtain (4.14): $\frac{u(y+k)-u(y)}{h}\frac{k}{k}=\frac{u(y+k)-u(y)}{k}\frac{v(x+h)-v(x)}{h}$. When $h$ goes to $0$, the last quotient on the right becomes $v'(x)$. Also, as $h$ goes to $0$, $k$ also goes to $0$, because $k=v(x+h)-v(x)$ and $v$ is continuous at $x$. Therefore the first quotient on the right approaches $u'(y)$ as $h$ tends to zero, and this proves the result. $\square$
Although the foregoing argument seems to be the most natural way to proceed, it is not completely general. Since $k=v(x+h)-v(x)$, it may happen that $k=0$ for infinitely many values of $h$ as $h$ tends to zero in which case the passage from (4.13) to (4.14) is not valid.
My doubt:
I have trouble understanding the line "it may happen that $k=0$ for infinitely many values of $h$ as $h$ tends to zero" What is this line trying to convey and why is the proof incorrect?
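For concreteness, I experimented with $v(x)=x^2\sin(1/x)$, $v(0)=0$ (my own guess at the kind of function meant), where $k=v(0+h)-v(0)$ really does vanish for infinitely many $h$ approaching $0$; a minimal Python/sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
v = x**2 * sp.sin(1 / x)        # with v(0) = 0 by convention

# h_n = 1/(n*pi) -> 0 as n grows, yet k = v(h_n) - v(0) = 0 for every n
for n in (1, 10, 100, 1000):
    h = 1 / (n * sp.pi)
    print(h, v.subs(x, h))      # the second column is 0 every time
```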
Thanks in advance.
REPLY [2 votes]: Apostol's proof is common, but wrong. We need a denominator-free description of the derivative. This is provided by the following
Lemma. The function $f$ is differentiable at the interior point $a$ of its domain iff there is a function $m$ with the same domain, and continuous at $a$, satisfying
$$f(x)-f(a)=m(x)\>(x-a)\ .$$
The value $m(a)$ is called the derivative of $f$ at $a$.
Denote this function by $m_{f,\,a}$ when necessary. We then have
$$\eqalign{f(x)-f(a)&=u\bigl(v(x)\bigr)-u\bigl(v(a)\bigr)=m_{u,\,v(a)}\bigl(v(x)\bigr)\ \bigl(v(x)-v(a)\bigr)\cr &=m_{u,\,v(a)}\bigl(v(x)\bigr)\ m_{v,\,a}(x)\ (x-a)\ .\cr}\tag{1}$$
The factor
$$g(x):=m_{u,\,v(a)}\bigl(v(x)\bigr)\ m_{v,\,a}(x)$$
in $(1)$ is continuous at $x=a$ and has the value $u'\bigl(v(a)\bigr)\>v'(a)$ there. The Lemma then implies that $(u\circ v)'(a)$ exists, and has the proposed value.<|endoftext|>
TITLE: Is $n^7 - 77$ ever a Fibonacci number?
QUESTION [11 upvotes]: As the question title suggests, is $n^7 - 77$ ever a Fibonacci number, where $n$ is an integer?
REPLY [27 votes]: The Fibonacci sequence goes as follows:
$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
F_n &1&1&2&3&5&8&13&21&34&55&89&144&233&377&610&987\\
F_n\pmod{29}&1&1&2&3&5&8&13&21&5&26&2&28&1&0&1&1\end{array}$
It is clear that the cycle will repeat for $F_n\pmod{29}$, since we have again reached a point with two consecutive $1$'s.
So, the set of possible values modulo $29$ that a Fibonacci number can take is $\{0,1,2,3,5,8,13,21,26,28\}$.
On the other hand, $n^7\pmod{29}$ can only ever be $0,1,12,17,28$: by Fermat's little theorem, if $29\nmid n$ then $(n^7)^4=n^{28}\equiv 1\pmod{29}$, so $n^7$ is a fourth root of unity mod $29$, i.e. one of $\pm 1,\pm 12$.
Thus, $n^7-77$ can only ever be one of $\{9,10,11,22,27\}$
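(A brute-force confirmation of both residue sets, as a minimal Python sketch:)

```python
def fib_residues(m):
    seen, a, b = set(), 0, 1
    for _ in range(6 * m):          # comfortably longer than the Pisano period
        seen.add(a)
        a, b = b, (a + b) % m
    return seen

m = 29
fib     = fib_residues(m)                         # {0,1,2,3,5,8,13,21,26,28}
powers7 = {pow(n, 7, m) for n in range(m)}        # {0,1,12,17,28}
target  = {(x - 77) % m for x in powers7}         # {9,10,11,22,27}
print(sorted(fib), sorted(target), fib & target)  # the intersection is empty
```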
We see that the set of possible values of $n^7-77\pmod{29}$ does not intersect the set of possible values of $F_n\pmod{29}$, therefore no solutions exist.<|endoftext|>
TITLE: Numbers whose reciprocals sum to $1$
QUESTION [8 upvotes]: What are all the numbers that can be written as $a_1+a_2+\dots+a_n$, where $a_1,\dots,a_n$ are positive integers such that $\frac{1}{a_1}+\dots+\frac{1}{a_n}=1$? For instance, such numbers include $4=2+2$, $11=2+3+6$, and $16=4+4+4+4$.
Is there a characterization of such numbers? The first few are $1, 4, 9, 10$ and $11$.
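Here is the brute-force search I used to find these (a minimal Python sketch; the function names are my own):

```python
from fractions import Fraction

def representable(total):
    """Can `total` be written as a sum of positive integers whose reciprocals sum to 1?"""
    def search(rem_sum, rem_recip, min_part):
        if rem_recip == 0:
            return rem_sum == 0
        if rem_sum <= 0:
            return False
        for a in range(min_part, rem_sum + 1):   # parts chosen in nondecreasing order
            if Fraction(1, a) > rem_recip:
                continue
            if search(rem_sum - a, rem_recip - Fraction(1, a), a):
                return True
        return False
    return search(total, Fraction(1), 1)

print([n for n in range(1, 25) if representable(n)])
# should print [1, 4, 9, 10, 11, 16, 17, 18, 20, 22, 24]
```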
REPLY [8 votes]: These are called Egyptian numbers. It is known that all numbers greater than $23$ are Egyptian, so you get a characterization by listing a finite list of non-Egyptian numbers.<|endoftext|>
TITLE: What does $\mathbb{R}^n \to \mathbb{R}^m$ mean? And what is $\mathbb{R}^n$?
QUESTION [11 upvotes]: What does $\mathbb{R}^n$ mean? For example, if something says that it is a transformation $T:\mathbb{R}^2 \rightarrow \mathbb{R}^3$, does that mean that $\mathbb{R}^2$ is a $2 \times 2$ matrix and $\mathbb{R}^3$ a $3 \times 3$ matrix?
REPLY [6 votes]: The symbol $\Bbb R^n$ refers to $n$-dimensional Euclidean space. As a set, it is the collection of all $n$-tuples of real numbers. That is,
$$
\Bbb R^n=\{(x_1,\dotsc,x_n):x_1,\dotsc,x_n\in\Bbb R\}
$$
For example $\Bbb R^2$ is the collection of all pairs of real numbers $(x,y)$, sometimes referred to as the Euclidean plane. The set $\Bbb R^3$ is the collection of all triples of numbers $(x,y,z)$, sometimes referred to as $3$-space.
Now, it is a fact that every linear transformation $T:\Bbb R^n\to\Bbb R^m$ is of the form $T(x)=Ax$ for some $m\times n$ matrix $A$.
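For instance (a minimal Python sketch with numpy; the particular numbers are arbitrary), a linear $T:\Bbb R^2\to\Bbb R^3$ is just multiplication by a $3\times 2$ matrix:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 3.0]])   # a 3x2 matrix, so T: R^2 -> R^3

x = np.array([4.0, 5.0])     # a vector in R^2
print(A @ x)                 # its image in R^3: [ 4.  5. 23.]
```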
In general, a function $F:\Bbb R^n\to\Bbb R^m$ is of the form
$$
F(x_1,\dotsc,x_n)=\bigl(f_1(x_1,\dotsc,x_n),\dotsc,f_m(x_1,\dotsc,x_n)\bigr)
$$
where $f_1,\dotsc,f_m$ are functions $\Bbb R^n\to\Bbb R$.<|endoftext|>
TITLE: Making perfect cube from factors of $20!$
QUESTION [7 upvotes]: At least how many different factors of $20!$ must we choose so that we can always find some subset whose product is a perfect cube? For example, if we choose $\{2,3,5,7,11,13,17,19,22,26,34,38\}$, then no subset has a product that is a perfect cube, so the answer must be more than $12$.
We can write
$$20!=2^{18}\times 3^8\times 5^4\times 7^2\times 11\times 13\times 17\times 19$$
The total number of factors is $(18+1)(8+1)(4+1)(2+1)(1+1)^4=41040$, but I don't expect we need to use anything close to that.
REPLY [2 votes]: Seventeen.
There are sixteen factors that are primes or $2^3$ multiplied by a prime, and no non-empty product of these numbers is a perfect cube.
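To make this concrete: the sixteen factors are the eight primes $2,3,\dots,19$ together with $8\cdot 2, 8\cdot 3,\dots,8\cdot 19$. A minimal Python sketch brute-forces all $2^{16}-1$ non-empty subsets and finds no perfect-cube product:

```python
from itertools import combinations

primes = [2, 3, 5, 7, 11, 13, 17, 19]

def exponent_vector(n):
    """Exponent of each prime of 20! in n, reduced mod 3."""
    vec = []
    for p in primes:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        vec.append(e % 3)
    return tuple(vec)

factors = primes + [8 * p for p in primes]        # the sixteen chosen divisors of 20!
vectors = [exponent_vector(f) for f in factors]

# a subset product is a perfect cube iff every prime exponent sums to 0 mod 3
cube_subset_exists = any(
    all(sum(col) % 3 == 0 for col in zip(*subset))
    for r in range(1, len(vectors) + 1)
    for subset in combinations(vectors, r)
)
print(cube_subset_exists)                         # False
```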
In the other direction, given a sequence of $17$ factors of $20!,$ we can map each factor $x_i=2^{n_{i,1}}3^{n_{i,2}}5^{n_{i,3}}\cdots 19^{n_{i,8}}$ to the tuple $n_i=(n_{i,1},\dots,n_{i,8})$ in the group $G=(\mathbb Z/3\mathbb Z)^8.$ By Olson's theorem (J.E. Olson, "A combinatorial problem on finite abelian groups I–II" J. Number Th. , 1 (1969) pp. 8–10; 195–199) $G$ has Davenport constant 17, which means some non-empty subsequence has total zero. This corresponds to a non-empty product of the factors being a perfect cube.<|endoftext|>
TITLE: Faster way to find Taylor series
QUESTION [13 upvotes]: I'm trying to figure out if there is a better way to teach the following Taylor series problem. I can do the problem myself, but my solution doesn't seem very nice!
Let's say I want to find the first $n$ terms (small $n$ - say 3 or 4) in the Taylor series for
$$
f(z) = \frac{1}{1+z^2}
$$
around $z_0 = 2$ (or more generally around any $z_0\neq 0$, to make it interesting!) Obviously, two methods that come to mind are 1) computing the derivatives $f^{(n)}(z_0)$, which quickly turns into a bit of a mess, and 2) making a change of variables $w = z-z_0$, then computing the power series expansion for
$$
g(w) = \frac{1}{1+(w+z_0)^2}
$$ and trying to simplify it, which also turns into a bit of a mess. Neither approach seems particularly rapid or elegant. Any thoughts?
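For reference, here is the expansion I'm after, checked with a CAS (a minimal sympy sketch):

```python
import sympy as sp

z = sp.symbols('z')
f = 1 / (1 + z**2)

# Taylor expansion around z0 = 2, first four terms
print(sp.series(f, z, 2, 4))
# first terms: 1/5 - 4(z-2)/25 + 11(z-2)^2/125 - 24(z-2)^3/625 + O((z-2)^4)
```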
REPLY [13 votes]: Let $g(w) = \sum_{n=0}^{\infty} a_n w^n$.
Then
$(w^2+4w+5) \; g(w) = 1$ (since $1+z^2 = 1+(w+2)^2 = w^2+4w+5$) implies
$$\begin{align}
5 a_0 &= 1 \\
4 a_0 + 5 a_1 &= 0 \\
a_0 + 4 a_1 + 5 a_2 &= 0 \\
a_1 + 4 a_2 + 5 a_3 &= 0 \\
\text{etc.}
\end{align}$$
which you can then solve for the $a_n$'s in a stepwise fashion: $a_0=\tfrac15$, $a_1=-\tfrac{4}{25}$, $a_2=\tfrac{11}{125}$, $a_3=-\tfrac{24}{625}$, and so on.<|endoftext|>
TITLE: Find $\sum_{m=0}^n\ (-1)^m m^n {n \choose m}$
QUESTION [7 upvotes]: I'm going to university in October and thought I'd have a go at a few questions from one of their past papers. I have completed the majority of this question but I'm stuck on the very last part. In honesty I've been working on this paper a while now and I'm a bit tired so I'm probably giving up earlier than I usually would.
I won't write out the full question, only the last part:
Let $$S_r(n) = \sum_{m=0}^n\ (-1)^m m^r {n \choose m}$$ where r is a
non-negative integer. Show that $S_r(n)=0$ for $r < n$, and find $S_n(n)$.
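A quick numerical check of the claim, and of the value of $S_n(n)$ (a minimal Python sketch):

```python
from math import comb, factorial

def S(r, n):
    return sum((-1)**m * m**r * comb(n, m) for m in range(n + 1))

n = 6
print([S(r, n) for r in range(n)])       # all zero for r < n
print(S(n, n), (-1)**n * factorial(n))   # S_n(n) equals (-1)^n n!
```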
TITLE: Octonionic formula for the ternary eight-dimensional cross product
QUESTION [9 upvotes]: A cross product is a multilinear map $X(v_1,\cdots,v_r)$ on a $d$-dimensional oriented inner product space $V$ for which (i) $\langle X(v_1,\cdots,v_r),w\rangle$ is alternating in $v_1,\cdots,v_r,w$ and (ii) the magnitude $\|X(v_1,\cdots,v_r)\|$ equals the $r$-dimensional volume of the parallelotope spanned by $v_1,\cdots,v_r$.
Condition (i) is equivalent to saying $X(v_1,\cdots,v_r)$ is perpendicular to each one of $v_1,\cdots,v_r$, and condition (ii) is algebraically given in terms of the Gram determinant:
$$\|X(v_1,\cdots,v_r)\|^2=\det\begin{bmatrix}\langle v_1,v_1\rangle & \cdots & \langle v_1,v_r\rangle \\ \vdots & \ddots & \vdots \\ \langle v_r,v_1\rangle & \cdots & \langle v_r,v_r\rangle\end{bmatrix} $$
An orthogonal transformation $g\in\mathrm{O}(V)$ may be applied to $X$ via the formula
$$ (g\cdot X)(v_1,\cdots,v_r):=gX(g^{-1}v_1,\cdots,g^{-1}v_r).$$
In this way, $\mathrm{O}(V)$ acts on the moduli space of cross products on $V$ of a given type.
It's a relatively simple matter to classify cross products of type $(r,d)$ when $r\ge d-1$ or $r\le 1$, and for any type $(r,d)$ defined on $V$ one may define a type $(r-1,d-1)$ on the oriented orthogonal complement of a unit $v\in V$ by fixing $v_r=v$ in $X(v_1,\cdots,v_r)$. The binary cross products ($r=2$) correspond to composition algebras $A$: for pure imaginary $u,v\in A$ we have the multiplication rule $uv=-\langle u,v\rangle+u\times v$ (and one can use this to construct $A$ from $\times$).
So the octonions $\mathbb{O}$ give rise to a cross product of type $(2,7)$. Its symmetry group is $G_2=\mathrm{Aut}(\mathbb{O})$, which is a rather awkward kind of symmetry (and small compared to $\mathrm{SO}(8)$). But it's the shadow of a type $(3,8)$ one with much nicer symmetry group $\mathrm{Spin}(7)\hookrightarrow\mathrm{SO}(8)$.
To understand this latter symmetry group: the Clifford algebra $\mathrm{Cliff}(V)$ is the tensor algebra $T(V)$ modulo the relations $v^2=-1$ for all unit $v\in V$, and the spin group $\mathrm{Spin}(V)$ is the group consisting of products of evenly many unit vectors of $V$. In $\mathbb{O}$, pure imaginary unit elements are square roots of $-1$, so there is the following action of $\mathrm{Spin}(\mathrm{Im}(\mathbb{O}))$ on $\mathbb{O}$:
$$(u_1\cdot u_2\cdots u_{2k-1}\cdot u_{2k})\,v=u_1(u_2(\cdots u_{2k-1}(u_{2k}v)\cdots)). $$
A formula for the ternary cross product on $\mathbb{O}$ is $X(a,b,c)=\frac{1}{2}[a(\overline{b}c)-c(\overline{b}a)]$. The only place I've been able to find this (or any) octonionic formula for it is here. Where does it come from?
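Numerically the quoted formula does behave like a cross product; here is a minimal Python sketch, where the Cayley–Dickson helper is my own scaffolding in one standard convention, not something taken from the linked source:

```python
import numpy as np

def conj(x):
    """Cayley-Dickson conjugate: negate every coordinate except the first."""
    y = -np.asarray(x, dtype=float)
    y[0] = -y[0]
    return y

def mul(x, y):
    """Cayley-Dickson product for arrays of length 1, 2, 4, 8 (reals up to octonions)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    if n == 1:
        return x * y
    a, b, c, d = x[:n//2], x[n//2:], y[:n//2], y[n//2:]
    return np.concatenate([mul(a, c) - mul(conj(d), b),
                           mul(d, a) + mul(b, conj(c))])

def X(a, b, c):
    """The quoted ternary cross product: (a (conj(b) c) - c (conj(b) a)) / 2."""
    return (mul(a, mul(conj(b), c)) - mul(c, mul(conj(b), a))) / 2

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 8))
x = X(a, b, c)
print(x @ a, x @ b, x @ c)                # all ~ 0: condition (i)
gram = np.array([[a @ a, a @ b, a @ c],
                 [b @ a, b @ b, b @ c],
                 [c @ a, c @ b, c @ c]])
print(x @ x, np.linalg.det(gram))         # equal up to rounding: condition (ii)
```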
Before I found that formula, I tried to create my own. I reasoned that if $X(a,b,c)$ restricts to the binary one on $\mathrm{Im}(\mathbb{O})$ then we at least know $X(1,b,c)=\mathrm{Im}(\mathrm{Im}(b)\mathrm{Im}(c))$. Then I figured that, to evaluate $X(a,b,c)$, we can rotate the "frame" $\{a,b,c\}$ to $\{|a|,\circ,\circ\}$ via some rotation, then apply $X$, then rotate back. There is a canonical rotation sending $a$ to $1$, namely left multiplication by $\overline{a}/|a|$, so I wrote out the formula
$$X(a,b,c)=a\,\mathrm{Im}\left(\mathrm{Im}\left(\frac{\overline{a}}{|a|}b\right)\mathrm{Im}\left(\frac{\overline{a}}{|a|}c\right)\right).$$
I've verified that my $X(a,b,c)$ has the correct magnitude, is perpendicular to $a,b,c$, and is alternating and linear in $b$ and $c$, but I wouldn't know how to show it's linear in $a$ (or alternating in $a,b$, say, or cyclically symmetric in $a,b,c$). Through some laborious calculations I was able to determine that the difference between my $X$ and their $X$ is the associator $[\overline{a},b,\overline{a}c]$, so they're not quite the same. One nice thing about my formula (besides having a heuristic backstory) is that it looks like it might be amenable to showing $\mathrm{Spin}(7)$ symmetry.
Is there anything salvageable in my formula or its "derivation"? If not, what then is the backstory behind the given formula at the link? Ultimately, at the end of the day, I'd like: the octonionic formula for the ternary cross product, a plausible story about how I could have discovered the formula from scratch on a desert island, and a direction to go in to start seeing the $\mathrm{Spin}(7)$ symmetry. Part of that story is already told by the information I've provided.
REPLY [2 votes]: First of all, let's say we make the middle argument of $X(\cdot,\cdot,\cdot)$ the "special one," I suppose for symmetry's sake. We know that $X(a,1,c)$ should be the usual binary cross product on $\mathrm{Im}(\mathbb{O})$, which has the formula $a\times c=\frac{1}{2}[ac-ca]$ when $a,c$ are imaginary. Since that formula depends only on the imaginary parts of $a,c$ and the same should go for $X(a,1,c)$, we can extend that formula so that it holds for all $a,c$.
Let $G\subseteq\mathrm{O}(V)$ be the symmetry group of $X$. Ideally, we want it to act transitively on the unit sphere $S^7\subseteq\mathbb{O}$, in which case for every unit octonion $b$ there should be a $g\in G$ with the property $g^{-1}b=1$, in which case $X(a,b,c)=gX(g^{-1}a,1,g^{-1}c)$ can be evaluated using the formula. We don't know what $G$ is, but there is a canonical element of $\mathrm{O}(V)$ that rotates $1$ to $b$, namely (say left) multiplication by $b$. Checking $bX(b^{-1}a,1,b^{-1}c)$ gives
$$ \frac{1}{2}b\left[(\overline{b}a)(\overline{b}c)-(\overline{b}c)(\overline{b}a)\right]. $$
Unfortunately, the desired simplification $b[(\overline{b}a)(\overline{b}c)]\to a(\overline{b}c)$, while seemingly begging to be true, is not valid. The Moufang identities do not help since $b\ne\overline{b}$.
The idea can be augmented though. We already know the value of $X(a,b,c)$ when $b$ is real, so we need to know its value when $b$ is imaginary. Now when we apply the above idea (in which case left multiplication by $b$ corresponds to an element of $\mathrm{Pin}(\mathrm{Im}(\mathbb{O}))$ acting) we have $\overline{b}=-b$ in which case we can simplify $b((ba)(bc))$ by writing $x=bab^{-1}$ and $y=bc$ so it becomes
$$ b((ba)(bc))=b((xb)y)=(bxb)y=-a(bc). $$
Therefore, we get
$$ X(a,b,c)=-\frac{1}{2}\left[a(bc)-c(ba)\right]$$
when $b$ is purely imaginary. In general, when we split $b$ inside $X(a,b,c)$ into real and imaginary parts, we wind up with
$$ X(a,b,c)=\frac{1}{2}\left[a(\overline{b}c)-c(\overline{b}a)\right].$$
The nice thing about this is that $\mathrm{Pin}(7)$-symmetry is built right into the motivation behind the formula. It's easy to check that $\mathrm{Pin}(7)$ stabilizes this, but I don't know how to prove it's the full symmetry group. In any case, checking this is a cross product at this point should be comparatively straightforward.<|endoftext|>
TITLE: Why use geometric algebra and not differential forms?
QUESTION [10 upvotes]: This is somewhat similar to Are Clifford algebras and differential forms equivalent frameworks for differential geometry?, but I want to restrict discussion to $\mathbb{R}^n$, not arbitrary manifolds.
Moreover, I am interested specifically in whether
$$(\text{differential forms on }\mathbb{R}^n\text{ + a notion of inner product defined on them}) \simeq \text{geometric algebra over }\mathbb{R}^n$$
where the isomorphism is as Clifford algebras. (I.e., is geometric algebra just the description of the algebraic properties of differential forms when endowed with a suitable notion of inner product?)
1. Is any geometric algebra over $\mathbb{R}^n$ isomorphic to the exterior algebra over $\mathbb{R}^n$ in the following senses:
as a vector space? (Should be yes.)
as an exterior algebra?
(Obviously they are not isomorphic as Clifford algebras unless our quadratic form is the zero quadratic form.)
Since the basis of the geometric algebra (as a vector space) is the same as (or at least isomorphic to) the basis of the exterior algebra over $\mathbb{R}^n$, the answer seems to be yes. Also because the standard embedding of any geometric algebra over $\mathbb{R}^n$ into the tensor algebra over $\mathbb{R}^n$ always "piggybacks" on the embedding of the exterior algebra over $\mathbb{R}^n$, see this MathOverflow question.
2. Are differential forms the standard construction of an object satisfying the algebraic properties of the exterior algebra over $\mathbb{R}^n$?
3. Does the answers to 1. and 2. being yes imply that the part in yellow is true?
EDIT: It seems like the only problem might be that differential forms are covariant tensors, whereas I imagine that multivectors are generally assumed to be contravariant. However, distinguishing between co- and contravariant tensors is a standard issue in tensor analysis, so this doesn't really seem like an important issue to me.
Assuming that I am reading this correctly, it seems like the elementary construction of the geometric algebra with respect to the standard inner product over $\mathbb{R}^n$ given by Alan MacDonald here is exactly just the exterior algebra over $\mathbb{R}^n$ with inner product.
David Hestenes seems to try and explain some of this somewhat here and here, although I don't quite understand what he is getting at.
(Also his claim in the first document that matrix algebra is subsumed by geometric algebra seems completely false, since he only addresses those aspects which relate to alternating tensors.)
REPLY [5 votes]: I just want to point out that GA can be used to make covariant multivectors (or differential forms) on $\mathbb R^n$ without forcing a metric onto it. In other words, the distinction between vectors and covectors (or between $\mathbb R^n$ and its dual) can be maintained.
This is done with a pseudo-Euclidean space $\mathbb R^{n,n}$.
Take an orthonormal set of spacelike vectors $\{\sigma_i\}$ (which square to ${^+}1$) and timelike vectors $\{\tau_i\}$ (which square to ${^-}1$). Define null vectors
$$\Big\{\nu_i=\frac{\sigma_i+\tau_i}{\sqrt2}\Big\}$$
$$\Big\{\mu_i=\frac{\sigma_i-\tau_i}{\sqrt2}\Big\};$$
they're null because
$${\nu_i}^2=\frac{{\sigma_i}^2+2\sigma_i\cdot\tau_i+{\tau_i}^2}{2}=\frac{(1)+2(0)+({^-}1)}{2}=0$$
$${\mu_i}^2=\frac{{\sigma_i}^2-2\sigma_i\cdot\tau_i+{\tau_i}^2}{2}=\frac{(1)-2(0)+({^-}1)}{2}=0.$$
More generally,
$$\nu_i\cdot\nu_j=\frac{\sigma_i\cdot\sigma_j+\sigma_i\cdot\tau_j+\tau_i\cdot\sigma_j+\tau_i\cdot\tau_j}{2}=\frac{(\delta_{i,j})+0+0+({^-}\delta_{i,j})}{2}=0$$
and
$$\mu_i\cdot\mu_j=0.$$
So the spaces spanned by $\{\nu_i\}$ or $\{\mu_i\}$ each have degenerate quadratic forms. But the dot product between them is non-degenerate:
$$\nu_i\cdot\mu_i=\frac{\sigma_i\cdot\sigma_i-\sigma_i\cdot\tau_i+\tau_i\cdot\sigma_i-\tau_i\cdot\tau_i}{2}=\frac{(1)-0+0-({^-}1)}{2}=1$$
$$\nu_i\cdot\mu_j=\frac{\sigma_i\cdot\sigma_j-\sigma_i\cdot\tau_j+\tau_i\cdot\sigma_j-\tau_i\cdot\tau_j}{2}=\frac{(\delta_{i,j})-0+0-({^-}\delta_{i,j})}{2}=\delta_{i,j}$$
Of course, we could have just started with the definition that $\mu_i\cdot\nu_j=\delta_{i,j}=\nu_i\cdot\mu_j$, and $\nu_i\cdot\nu_j=0=\mu_i\cdot\mu_j$, instead of going through "spacetime".
The space $V$ will be generated by $\{\nu_i\}$, and its dual $V^*$ by $\{\mu_i=\nu^i\}$. You can take the dot product of something in $V^*$ with something in $V$, which will be a differential 1-form. You can make contravariant multivectors from wedge products of things in $V$, and covariant multivectors from wedge products of things in $V^*$.
You can also take the wedge product of something in $V^*$ with something in $V$.
$$\mu_i\wedge\nu_i=\frac{\sigma_i\wedge\sigma_i+\sigma_i\wedge\tau_i-\tau_i\wedge\sigma_i-\tau_i\wedge\tau_i}{2}=\frac{0+\sigma_i\tau_i-\tau_i\sigma_i-0}{2}=\sigma_i\wedge\tau_i$$
$$\mu_i\wedge\nu_j=\frac{\sigma_i\sigma_j+\sigma_i\tau_j-\tau_i\sigma_j-\tau_i\tau_j}{2},\quad i\neq j$$
What does this mean? ...I suppose it could be a matrix (a mixed variance tensor)!
A matrix can be defined as a bivector:
$$M = \sum_{i,j} M^i\!_j\;\nu_i\wedge\mu_j = \sum_{i,j} M^i\!_j\;\nu_i\wedge\nu^j$$
where each $M^i_j$ is a scalar. Note that $(\nu_i\wedge\mu_j)\neq{^-}(\nu_j\wedge\mu_i)$, so $M$ is not necessarily antisymmetric. The corresponding linear function $f:V\to V$ is (with $\cdot$ the "fat dot product")
$$f(x) = M\cdot x = \frac{Mx-xM}{2}$$
$$= \sum_{i,j} M^i_j(\nu_i\wedge\mu_j)\cdot\sum_k x^k\nu_k$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i\mu_j-\mu_j\nu_i}{2}\cdot\nu_k$$
$$= \sum_{i,j,k} M^i_jx^k\frac{(\nu_i\mu_j)\nu_k-\nu_k(\nu_i\mu_j)-(\mu_j\nu_i)\nu_k+\nu_k(\mu_j\nu_i)}{4}$$
(the $\nu$'s anticommute because their dot product is zero:)
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i\mu_j\nu_k+\nu_i\nu_k\mu_j+\mu_j\nu_k\nu_i+\nu_k\mu_j\nu_i}{4}$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\mu_j\nu_k+\nu_k\mu_j)+(\mu_j\nu_k+\nu_k\mu_j)\nu_i}{4}$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\mu_j\cdot\nu_k)+(\mu_j\cdot\nu_k)\nu_i}{2}$$
$$= \sum_{i,j,k} M^i_jx^k\frac{\nu_i(\delta_{j,k})+(\delta_{j,k})\nu_i}{2}$$
$$= \sum_{i,j,k} M^i_jx^k\big(\delta_{j,k}\nu_i\big)$$
$$= \sum_{i,j} M^i_jx^j\nu_i$$
This agrees with the conventional definition of matrix multiplication.
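Here is a from-scratch numerical check of that last claim (a minimal Python sketch; the tiny blade-product helper is my own scaffolding rather than any particular GA library):

```python
import math
from itertools import product

# A tiny Clifford calculator for R^{2,2}: generators e1, e2 square to +1 (the sigma's),
# e3, e4 square to -1 (the tau's).  A multivector is a dict {sorted index tuple: coeff}.
metric = {1: 1.0, 2: 1.0, 3: -1.0, 4: -1.0}

def blade_mul(a, b):
    """Geometric product of two basis blades (sorted index tuples) -> (sign, blade)."""
    idx, sign, i = list(a) + list(b), 1.0, 0
    while i < len(idx) - 1:
        if idx[i] > idx[i + 1]:                 # distinct generators anticommute
            idx[i], idx[i + 1] = idx[i + 1], idx[i]
            sign, i = -sign, max(i - 1, 0)
        elif idx[i] == idx[i + 1]:              # e_k e_k = metric[k]
            sign *= metric[idx[i]]
            del idx[i:i + 2]
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(idx)

def gp(x, y):
    """Geometric product of two multivectors."""
    out = {}
    for (ba, ca), (bb, cb) in product(x.items(), y.items()):
        s, blade = blade_mul(ba, bb)
        out[blade] = out.get(blade, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def add(x, y, s=1.0):
    out = dict(x)
    for k, v in y.items():
        out[k] = out.get(k, 0.0) + s * v
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def scale(x, s):
    return {k: s * v for k, v in x.items()}

def dot(x, y):
    """Scalar grade of the symmetric part of the geometric product."""
    return 0.5 * (gp(x, y).get((), 0.0) + gp(y, x).get((), 0.0))

s2 = 1 / math.sqrt(2)
nu = [{(1,): s2, (3,): s2}, {(2,): s2, (4,): s2}]    # nu_i = (sigma_i + tau_i)/sqrt(2)
mu = [{(1,): s2, (3,): -s2}, {(2,): s2, (4,): -s2}]  # mu_i = (sigma_i - tau_i)/sqrt(2)

print([[round(dot(m, n), 12) for n in nu] for m in mu])   # identity: mu_i . nu_j = delta_ij
print([round(dot(n, m), 12) for n in nu for m in nu])     # all zeros: the nu's are null

def wedge(x, y):                                  # wedge of two vectors
    return scale(add(gp(x, y), gp(y, x), -1.0), 0.5)

# The bivector M = sum_ij M^i_j nu_i ^ mu_j, acting by the "fat dot" (M x - x M)/2
M_coeffs = [[1.0, 2.0], [3.0, 4.0]]
M = {}
for i in range(2):
    for j in range(2):
        M = add(M, scale(wedge(nu[i], mu[j]), M_coeffs[i][j]))

x = add(scale(nu[0], 5.0), scale(nu[1], 7.0))     # x = 5 nu_1 + 7 nu_2
fx = scale(add(gp(M, x), gp(x, M), -1.0), 0.5)    # M . x
print([round(dot(m, fx), 12) for m in mu])        # [19.0, 43.0], the usual matrix product
```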
In fact, it even works for non-square matrices; the above calculations work the same if the $\nu_i$'s on the left in $M$ are basis vectors for a different space. A bonus is that it also works for a non-degenerate quadratic form; the calculations don't rely on ${\mu_i}^2=0$, nor ${\nu_i}^2=0$, but only on $\nu_i$ being orthogonal to $\nu_k$, and $\mu_j$ being reciprocal to $\nu_k$. So you could instead have $\mu_j$ (the right factors in $M$) be in the same space as $\nu_k$ (the generators of $x$), and $\nu_i$ (the left factors in $M$) in a different space. A downside is that it won't map a non-degenerate space to itself.
I admit that this is worse than the standard matrix algebra; the dot product is not invertible, nor associative. Still, it's good to have this connection between the different algebras. And it's interesting to think of a matrix as a bivector that "rotates" a vector through the dual space and back to a different point in the original space (or a new space).
Speaking of matrix transformations, I should discuss the underlying principle for "contra/co variance": that the basis vectors may vary.
We want to be able to take any (invertible) linear transformation of the null space $V$, and expect that the opposite transformation applies to $V^*$. Arbitrary linear transformations of the external $\mathbb R^{n,n}$ will not preserve $V$; the transformed $\nu_i$ may not be null. It suffices to consider transformations that preserve the dot product on $\mathbb R^{n,n}$. One obvious type is the hyperbolic rotation
$$\sigma_1\mapsto\sigma_1\cosh\phi+\tau_1\sinh\phi={\sigma_1}'$$
$$\tau_1\mapsto\sigma_1\sinh\phi+\tau_1\cosh\phi={\tau_1}'$$
$$\sigma_2={\sigma_2}',\quad\sigma_3={\sigma_3}',\quad\cdots$$
$$\tau_2={\tau_2}',\quad\tau_3={\tau_3}',\quad\cdots$$
(or, more compactly, $x\mapsto\exp(-\sigma_1\tau_1\phi/2)x\exp(\sigma_1\tau_1\phi/2)$ ).
The induced transformation of the null vectors is
$${\nu_1}'=\frac{{\sigma_1}'+{\tau_1}'}{\sqrt2}=\exp(\phi)\nu_1$$
$${\mu_1}'=\frac{{\sigma_1}'-{\tau_1}'}{\sqrt2}=\exp(-\phi)\mu_1$$
$${\nu_2}'=\nu_2,\quad{\nu_3}'=\nu_3,\quad\cdots$$
$${\mu_2}'=\mu_2,\quad{\mu_3}'=\mu_3,\quad\cdots$$
The vector $\nu_1$ is multiplied by some positive number $e^\phi$, and the covector $\mu_1$ is divided by the same number. The dot product is still ${\mu_1}'\cdot{\nu_1}'=1$.
You can get a negative multiplier for $\nu_1$ simply by the inversion $\sigma_1\mapsto{^-}\sigma_1,\quad\tau_1\mapsto{^-}\tau_1$; this will also negate $\mu_1$. The result is that you can multiply $\nu_1$ by any non-zero Real number, and $\mu_1$ will be divided by the same number.
Of course, this only varies one basis vector in one direction. You could try to rotate the vectors, but a simple rotation in a $\sigma_i\sigma_j$ plane will mix $V$ and $V^*$ together. This problem is solved by an isoclinic rotation in $\sigma_i\sigma_j$ and $\tau_i\tau_j$, which causes the same rotation in $\nu_i\nu_j$ and $\mu_i\mu_j$ (while keeping them separate).
Combine these stretches, reflections, and rotations, and you can generate any invertible linear transformation on $V$, all while maintaining the degeneracy ${\nu_i}^2=0$ and the duality $\mu_i\cdot\nu_j=\delta_{i,j}$. This shows that $V$ and $V^*$ do have the correct "variance".
See also Hestenes' Tutorial, page 5 ("Quadratic forms vs contractions").<|endoftext|>
TITLE: A Contour Integral
QUESTION [5 upvotes]: I'm interested in computing the integral:
$$ - \frac{1}{2 \pi} \int_{- \infty}^{\infty} dE \; \frac{e^{-iEt}}{E^2 - \omega^2 + i\epsilon}. $$
I have two small queries:
How does one choose the relevant contour when doing the integration? For example, the solution to the problem argues:
If $t > 0$, we can add an integral along an
arc at infinity in the lower half complex $E$ plane, since $e^{-iEt}$ vanishes on this arc.
I'm not quite sure how one pins down the contour, or what it means exactly to 'add an integral' to the original integral at hand. If someone could explain how to approach solving the integral by choosing the relevant contour (and why), that'd be great.
For one of the poles, the solution states that:
By the
residue theorem, the value of the integral is $−2πi$ times this residue.
Where does the minus sign come from? Doesn't the residue theorem state that the value of the integral on a closed contour enclosing a pole is $2πi$ times this residue?
Thanks.
REPLY [3 votes]: The poles of the integrand are not at $E=\pm (\omega - i\epsilon)$. Rather, the poles are located at $E=\pm \sqrt{\omega^2 - i\epsilon}$.
Now, the Residue Theorem guarantees that
$$\oint_C \frac{e^{-iEt}}{E^2-\omega^2+i\epsilon}\,dE=2\pi i \sum_{\text{Residues}}\left(\frac{e^{-iEt}}{E^2-\omega^2+i\epsilon}\right) $$
where $C$ is a Counter Clockwise closed contour encircling zero, one, or two of the poles of the integrand.
Now, if $t<0$, we let $C$ be the contour comprised of (i) the real line segment from $-R$ to $R$ and (ii) the semicircle in the upper-half plane. This contour is traversed counter clockwise. Therefore, we have
$$\begin{align}
\oint_C \frac{e^{-iEt}}{E^2-\omega^2+i\epsilon}\,dE&=\int_{-R}^R\frac{e^{-iEt}}{E^2-\omega^2+i\epsilon}\,dE+\int_0^\pi \frac{e^{-iRe^{i\phi}t}}{R^2e^{i2\phi}-\omega^2+i\epsilon}\,iRe^{i\phi}\,d\phi \tag 1\\\\
&=2\pi i \frac{e^{i\sqrt{\omega^2-i\epsilon}\,t}}{-2\sqrt{\omega^2 -i\epsilon}}
\end{align}$$
As $R\to \infty$, the second integral on the right-hand side of $(1)$ vanishes (this is where $t<0$ is used), thereby revealing, for $t<0$,
$$\int_{-\infty}^\infty\frac{e^{-iEt}}{E^2-\omega^2+i\epsilon}\,dE=-i \pi \frac{e^{i\sqrt{\omega^2-i\epsilon}\,t}}{\sqrt{\omega^2 -i\epsilon}}$$
where we have tacitly defined the square root on the principal branch (The pole at $E=-\sqrt{\omega^2 - i\epsilon}$ is in the upper-half plane).
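A quick numerical cross-check of this $t<0$ result (a minimal Python sketch using scipy; the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

omega, eps, t = 1.3, 0.5, -1.0                     # a test case with t < 0
f = lambda E: np.exp(-1j * E * t) / (E**2 - omega**2 + 1j * eps)

re, _ = quad(lambda E: f(E).real, -np.inf, np.inf, limit=400)
im, _ = quad(lambda E: f(E).imag, -np.inf, np.inf, limit=400)

s = np.sqrt(omega**2 - 1j * eps)                   # principal branch; the pole -s is in the UHP
print(re + 1j * im)                                # numerical value of the integral
print(-1j * np.pi * np.exp(1j * s * t) / s)        # residue-theorem value; should agree
```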
Now, for $t>0$, we proceed analogously, but closing the plane in the lower half plane. Note in this case, the contour is traversed in a clockwise sense and therefore, we must replace $2\pi i $ with $-2\pi i$ in applying the residue theorem.<|endoftext|>
TITLE: Summation with combinations
QUESTION [11 upvotes]: Prove that $n$ divides $$\sum_{d \mid \gcd(n,k)} \mu(d) \binom{n/d}{k/d}$$ for every natural number $n$ and for every $k$ where $1 \leq k \leq n.$ Note: $\mu(n)$ denotes the Möbius function.
I have tried numerous values for this summation and the result seems to hold true. For example, if $n = 20, k = 12$
$$\sum_{d \mid 4} \mu(d) \binom{20/d}{12/d} = \mu(1) \binom{20}{12}+\mu(2) \binom{10}{6}+\mu(4)\binom{5}{3} = \binom{20}{12}-\binom{10}{6}=125760,$$ which is divisible by $20$. Similarly if we tried it for any $k$ with $1 \leq k\leq 20$, we would have $20$ divide the expression.
How do I prove this result in the general case? That is, given any positive integer $n$, for all $k$ with $1 \leq k \leq n$ $$n \mid \sum_{d \mid \gcd(n,k)} \mu(d) \binom{n/d}{k/d}.$$
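A brute-force check over many more cases (a minimal Python sketch):

```python
from math import comb, gcd

def mobius(n):
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:          # a squared prime factor
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

def S(n, k):
    g = gcd(n, k)
    return sum(mobius(d) * comb(n // d, k // d) for d in range(1, g + 1) if g % d == 0)

print(all(S(n, k) % n == 0 for n in range(1, 60) for k in range(1, n + 1)))  # True
```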
REPLY [5 votes]: I would like to present the algebraic aspects of this problem to
facilitate understanding. Suppose we have $r$ types of objects
(e.g. colors) with $k_1 + k_2 + \cdots + k_r = n$ objects total where
$k_q$ gives the number of objects of color $q$ and we ask about the
number of necklaces we can form with these (rotational symmetry as
opposed to dihedral symmetry).
Applying the Polya Enumeration Theorem (PET) we have the cycle
index of the cyclic group
$$Z(C_n) = \frac{1}{n}\sum_{d|n} \varphi(d) a_d^{n/d}.$$
PET now yields for the generating function of necklaces using at most
$r$ colors
$$q_n = \frac{1}{n}\sum_{d|n}
\varphi(d) (A_1^d + A_2^d +\cdots+A_r^d)^{n/d}.$$
We now introduce the concept of primitive necklaces $p_n$ i.e.
necklaces on at most $r$ colors not having any rotational
symmetry. Observe that an ordinary necklace is formed by concatenating
$d$ copies of a primitive necklace of size $n/d.$ (In fact it does not
matter where we open the primitive necklace ($n/d$ possibilities)
because when we arrange the copies of the opened necklace we always
get the same necklace regardless of where we opened the primitive
necklace.)
We will use a variety of Moebius inversion which is
Inclusion-Exclusion on the divisor poset in order to compute $p_n$ (a
generating function) and extract the desired coefficient. The possible
symmetries that can occur correspond to the divisors $f$ of $n$
($f|n$).
Using the variable $f$ we obtain as explained a segment of length
$n/f$ being repeated $f$ times, copies being placed next to each
other, thus creating $n/f$ cycles of length $f.$. These segments are
themselves necklaces of length $n/f.$ This means that the maximal
symmetry (smallest size of the constituent cycles) is a divisor of
$n/f$ because the segment could itself be a concatenation of repeated
segments. Ordering these in a poset by division yields an upside-down
instance of the divisor poset of $n$. Note that the generating
function for the contribution from $f$ is not
$$\frac{1}{n/f}\sum_{d|n/f}
\varphi(d) (A_1^d + A_2^d + \cdots + A_r^d)^{n/f/d}.$$
but rather
$$\frac{1}{n/f}\sum_{d|n/f}
\varphi(d) (A_1^{df} + A_2^{df} + \cdots + A_r^{df})^{n/f/d}.$$
which represents the $f$ copies of the source segment.
We thus obtain by Inclusion-Exclusion
$$\sum_{f|n} \mu(f)
\frac{f}{n}\sum_{d|n/f}
\varphi(d) (A_1^{df} + A_2^{df} + \cdots + A_r^{df})^{n/f/d}.$$
We put $fd=k$ so that $d=k/f$ to get
$$\sum_{f|n} \mu(f)
\frac{f}{n}\sum_{k/f|n/f}
\varphi(k/f) (A_1^{k} + A_2^{k} + \cdots + A_r^{k})^{n/k}
\\ = \frac{1}{n}
\sum_{f|n} f\mu(f)
\sum_{k|n \wedge f|k}
\varphi(k/f) (A_1^{k} + A_2^{k} + \cdots + A_r^{k})^{n/k}
\\ = \frac{1}{n}
\sum_{k|n} (A_1^{k} + A_2^{k} + \cdots + A_r^{k})^{n/k}
\sum_{f|k} f\mu(f) \varphi(k/f).$$
There are several ways to simplify the term
$$\sum_{f|k} f\mu(f) \varphi(k/f).$$
E.g. note that if
$$L_1(s) = \sum_{n\ge 1} \frac{n\mu(n)}{n^s}
= \prod_p \left(1-\frac{p}{p^s}\right) = \frac{1}{\zeta(s-1)}$$
and
$$L_2(s) = \sum_{n\ge 1} \frac{\varphi(n)}{n^s} =
\frac{\zeta(s-1)}{\zeta(s)}
\quad\text{because}\quad
\sum_{n\ge 1} \frac{1}{n^s} \sum_{d|n} \varphi(d)
= \zeta(s-1)$$
then
$$L_1(s) L_2(s) = \frac{1}{\zeta(s)}
\quad\text{and hence}\quad
\sum_{f|k} f\mu(f) \varphi(k/f) = \mu(k).$$
Substitute this into the formula to obtain
$$\frac{1}{n}
\sum_{k|n} \mu(k) (A_1^{k} + A_2^{k} +\cdots + A_r^{k})^{n/k}.$$
We seek
$$[A_1^{k_1} A_2^{k_2} \cdots A_r^{k_r}] \frac{1}{n}
\sum_{k|n} \mu(k) (A_1^{k} + A_2^{k} +\cdots + A_r^{k})^{n/k}.$$
Now observe that the term in the variables only produces powers that
are multiples of $k$ so we get the condition that
$$k|\gcd(n, k_1, k_2, \ldots k_r)$$
(we see that this produces a divisor of $n$) in which case we obtain a
contribution of (using $d$ for $k$ for readability)
$${n/d\choose k_1/d, k_2/d,\ldots k_r/d}$$
for an end result of
$$\frac{1}{n}\sum_{d|\gcd(k_1, k_2, \ldots k_r)}
\mu(d) {n/d\choose k_1/d, k_2/d,\ldots k_r/d}.$$
We now conclude that the sum is indeed a multiple of $n$: the displayed quantity is the number of primitive necklaces with the prescribed color multiplicities, hence a non-negative integer, so the sum itself is $n$ times an integer.
A similar problem appeared at this
MSE link.<|endoftext|>
TITLE: Examples of asymmetrically braided monoid
QUESTION [7 upvotes]: From nCatlab https://ncatlab.org/nlab/show/braiding :
Any braided monoidal category has a natural isomorphism
$$B_{x,y} \;\colon\; x \otimes y \to y \otimes x $$
called the braiding.
A braided monoidal category is symmetric if and only if $B_{x,y}$ and $B_{y,x}$ are inverses (although they are isomorphisms regardless).
This all makes sense, but I'm struggling to think of an instance where you would want to work with an asymmetric braiding. It's plain to me that they can exist, but ... are there any useful examples?
I got to this page from https://ncatlab.org/nlab/show/associative+unital+algebra where it was stating
Moreover, if $(\mathcal{C}, \otimes , 1)$ has the structure of a symmetric monoidal category $(\mathcal{C}, \otimes, 1, B)$ with symmetric braiding $\tau$, then a monoid $(A,\mu, e)$ as above is called a commutative monoid in $(\mathcal{C}, \otimes, 1, B)$ if in addition... [diagram here]
I was also wondering if this was necessary, the symmetry in the braiding. If the braiding were asymmetric, but $\mu \circ B_{x,y} = \mu = \mu \circ (B_{y,x})^{-1}$, we could still make sense of the multiplication being commutative. It seems we could make useful statements about the algebra, even with a strange braiding like this. Are there any examples of this, either?
REPLY [5 votes]: Let $A$ be an associative algebra. You probably know about the center
$$Z(A) = \{ a \in A \mid \forall b \in A, ab = ba \},$$
the set of elements that commute with all the others. It's easy to show that this is a commutative subalgebra of $A$. It's a rather interesting construction to study.
Now consider a monoidal category $\mathcal{C}$, morally an "associative algebra in categories". It turns out that some notion of center exists for $\mathcal{C}$, called the Drinfeld center. Morally, it is still "the elements that commute with all the others", i.e. the objects $X$ such that for all $Y$, $X \otimes Y \cong Y \otimes X$.
The problem is that as usual in category theory, we're looking for natural stuff, and so it's better to consider pairs $(X, \Phi)$ where $X$ is an object and $\Phi$ is "a way of commuting $X$ with all the other elements", i.e. a natural transformation $X \otimes - \to - \otimes X$ – a so-called half-braiding. It turns out that given some $X$ that "commutes with all the other objects", there may be multiple different ways of making it commute with all the other objects. A morphism between two pairs is the obvious thing, and this forms a category, call it $\mathscr{Z}(\mathcal{C})$, the Drinfeld center of $\mathcal{C}$.
Then it's a theorem of Drinfeld, Majid, and Joyal–Street* that $\mathscr{Z}(\mathcal{C})$ is canonically a braided monoidal category, and not a symmetric monoidal category as one might expect. If $\mathcal{C}$ is a category with one object, i.e. a monoid, then we recover the usual notion of center of a monoid, and this is symmetric monoidal. So going from monoids to categories, we've lost (or gained?) something: the center of a category is not symmetric anymore but braided.
* If I understand correctly, it was developed independently by Drinfeld and by Joyal–Street, but Drinfeld initially didn't publish anything about it and it appeared in a paper of Majid who attributed it to a private communication by Drinfeld. I'd be happy to hear a more detailed historical account of this, TBH.<|endoftext|>
TITLE: Bridges across a tiled floor
QUESTION [6 upvotes]: A few years back, a friend of mine did a seminar on "Bridges across a tiled floor". A "bridge" was defined as a row or column of an $n \times n$ binary matrix consisting entirely of $1$'s, for example the third column and fourth row of
\begin{bmatrix}
1&0&1&0 \\
0&0&1&0 \\
0&1&1&1 \\
1&1&1&1
\end{bmatrix}
The problem is to find the probability of selecting an $n\times n$ binary matrix with at least one bridge, when selecting from all $n\times n$ binary matrices. My friend made an algorithm using Markov chains for calculating it for a given $n$, but we never found a closed formula. I was wondering if there was a simple approach, or if anyone knows how to find the solution.
I made several attempts. My first attempt was to try a purely combinatorial solution, but the interconnectivity made it a bit ridiculous. I tried to solve the complementary problem by placing $0$'s on the main diagonal, permuting them, and considering all other choices for the other entries, but this resulted in multiple ways of attaining the same matrix. I tried solving the simpler problems of only column bridges or row bridges, which had simple solutions, but combining them proved difficult. And most recently (which I haven't fully fleshed out), I tried setting up a recursive relationship from the $n-1$ case to the $n$ case.
Any insight would be greatly appreciated.
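For anyone who wants to experiment, a brute-force count for small $n$ (a minimal Python sketch):

```python
from itertools import product

def has_bridge(m, n):
    return any(all(m[i][j] == 1 for j in range(n)) for i in range(n)) or \
           any(all(m[i][j] == 1 for i in range(n)) for j in range(n))

def bridged_count(n):
    count = 0
    for bits in product((0, 1), repeat=n * n):
        m = [bits[i * n:(i + 1) * n] for i in range(n)]
        count += has_bridge(m, n)
    return count

for n in range(1, 5):
    b = bridged_count(n)
    print(n, b, b / 2**(n * n))    # number of bridged matrices and the probability
```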
REPLY [2 votes]: Let $F(a,b)$ be the number of matrices with $a$ specific horizontal bridges and $b$ specific vertical bridges. The other rows and columns may or may not be bridges. Then the number of unaffected squares is $(n-a)(n-b)$ so $$F(a,b)=2^{(n-a)(n-b)}$$
Let $B(n)$ be the number of bridged arrangements.
Now, do inclusion-exclusion:
Start with single bridges: There are $F(1,0)$ matrices with a bridge on the first row, another $F(1,0)$ with a bridge on the second row, and so on, so $nF(1,0)$ in all with a horizontal bridge (counting repetitions). There are $F(0,1)$ with a bridge on the first column, etc., so another $nF(0,1)$ with a vertical bridge.
$nF(1,0)+nF(0,1)$.
Two-bridge patterns have been counted twice, so that number must be subtracted. If both bridges are horizontal: there are $n\choose2$ pairs of bridges, each pair has $F(2,0)$ patterns. If both bridges are vertical, there are another ${n\choose2}F(0,2)$ patterns. If one is vertical and the other horizontal, there are $n$ choices for the horizontal one and $n$ choices for the vertical one. In every case, there are $F(1,1)$ patterns.
Subtract ${n\choose2}F(2,0)+{n\choose1}{n\choose1}F(1,1)+{n\choose2}F(0,2)$
Three-bridge patterns need to be added back in:
Add ${n\choose3}F(3,0)+{n\choose2}{n\choose1}F(2,1)+{n\choose1}{n\choose2}F(1,2)+{n\choose3}F(0,3)$
etc...
The total with no bridges has a slightly simpler formula,
and that sum will be $$2^{n^2}-B(n)=\sum_{i=0}^n\sum_{j=0}^n(-1)^{i+j}{n\choose i}{n\choose j}2^{(n-i)(n-j)}$$
The symmetry with $(i,j)$ replaced by $(n-i,n-j)$ gives
$$\begin{array}{rcl}2^{n^2}-B(n)&=&\sum_{i=0}^n(-1)^i{n\choose i}\sum_{j=0}^n(-1)^j{n\choose j}2^{ij}\\&=&\sum_{i=0}^n(-1)^i{n\choose i}(1-2^i)^n\end{array}$$
To check for $n=1,2,3$:
$n=1:\ 2-B(1)=0-(-1)=1\to B(1)=1$
$n=2:\ 16-B(2)=1\cdot 0^2-2\cdot 1^2+1\cdot 3^2=7\to B(2)=9$
$n=3:\ 512-B(3)=1\cdot 0^3+3\cdot 1^3-3\cdot 3^3+1\cdot 7^3\to B(3)=247$<|endoftext|>
TITLE: How does the geometric product work? Inconsistent/circular?
QUESTION [9 upvotes]: I am trying to learn Geometric Algebra from the textbook by Doran and Lasenby.
They claim in chapter 4 that the geometric product $ab$ between two vectors $a$ and $b$ is defined according to the axioms
i) associativity: $(ab)c = a(bc) = abc$
ii) distributive over addition: $a(b+c) = ab+ac$
iii) The square of any vector is a real scalar
Then they claim that the inner and outer product are defined as
$ a \cdot b = \frac{1}{2} (ab+ba) $
$ a \wedge b = \frac{1}{2} (ab-ba) $
so that
$ a b = a \cdot b + a \wedge b $
My problem is that if you are given two vectors, say
$ a = 1e_1 + 3 e_2 - 2e_3 $
$ b = 5e_1 -2 e_2 + 1e_3 $
How to actually compute $ab$?
I mean, you then have to specify how either $ a \cdot b $ and $ a \wedge b $ works, or how (in detail) $ a b $ are to be performed.
This is in my view, circular.
From another point of view, say that you start backwards by defining the inner and outer product, and then define the geometric product as $ a b = a \cdot b + a \wedge b $
Then how to show that the geometric product is associative? The usual definition of the outer product is associative, but the usual definition of the inner (dot) product is NOT associative. So how to show that the geometric product is associative if you take the inner and outer product as starting point for the geometric product?
Thank you very much in advance for any kind of feedback
REPLY [2 votes]: You use axioms i, ii, and iii to compute the geometric product. The dot and wedge products can then be computed.
Consider $u = 3e_1 + 2e_2$ and $v = e_1 + 4e_2$.
Then, by axioms (i) and (ii), we can write
$$uv = 3e_1 e_1 + 12 e_1 e_2 + 2 e_2 e_1 + 8 e_2 e_2.$$
This distributes over addition and drops parentheses, as they would be redundant or unnecessary.
Now, using axiom (iii), we can evaluate the products $e_1 e_1 = e_2 e_2 = 1$ to get
$$uv = 3 + 12 e_1 e_2 + 2 e_2 e_1 + 8.$$
Now, to simplify this further, we typically use a derived result: let $x,y$ be vectors, and consider the geometric product
$$(x+y)(x+y) = xx + xy + yx + yy$$
Let's amend axiom (iii) somewhat: it is not enough that the product of a vector with itself be a scalar. Rather, we presume the presence of some symmetric bilinear form (usually positive definite, but in pseudo-Riemannian geometry, this condition is relaxed), so there must exist some map $g: V\times V \to K$, and $xx = g(x,x)$ for any $x$.
Hence, we can consider the case in which $g(x,y) = 0$, or $x \perp y$ in other words. Then
$$(x+y)(x+y) = g(x+y, x+y) = g(x,x) + g(y,y) = xx + yy$$
Hence, we conclude that, when $x \perp y$ under $g$, $xy = -yx$. This result is often used in simplifying geometric products. Applying it to our original problem yields
$$uv = 11 + 10 e_1 e_2$$
and in turn,
$$vu = 11 - 10 e_1 e_2$$
from which the dot and wedge products can be computed.
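A quick numeric re-check of the computation above, using a hand-coded multiplication table for the geometric product of the plane (a minimal Python sketch):

```python
# Multivectors of the Euclidean plane as (scalar, e1, e2, e12) components.
def gp2d(a, b):
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (a0*b0 + a1*b1 + a2*b2 - a12*b12,
            a0*b1 + a1*b0 - a2*b12 + a12*b2,
            a0*b2 + a2*b0 + a1*b12 - a12*b1,
            a0*b12 + a12*b0 + a1*b2 - a2*b1)

u = (0, 3, 2, 0)    # 3 e1 + 2 e2
v = (0, 1, 4, 0)    # e1 + 4 e2
uv, vu = gp2d(u, v), gp2d(v, u)
print(uv, vu)                                      # (11, 0, 0, 10) and (11, 0, 0, -10)
print(tuple((x + y) / 2 for x, y in zip(uv, vu)))  # dot part: 11
print(tuple((x - y) / 2 for x, y in zip(uv, vu)))  # wedge part: 10 e1e2
```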
However, unlike Doran and Lasenby, I generally prefer not to use those definitions of the products. I usually compute the dot and wedge products in terms of grade projections. The symmetry properties change with grades, which makes those rules almost meaningless in my opinion.<|endoftext|>
TITLE: Asymptotic Moments of the Binomial Distribution, $E(X/(np))^k = 1 + O(k^2/n)$?
QUESTION [5 upvotes]: Let $X \sim \text{Binomial}(n, p)$ be the sum of $n$ Bernoulli($p$) random variables.
What is the value of $E(X/(np))^k$, where $k$ is a large integer, as $n$ grows large?
From calculations the first values are
E1 = 1
E(X/np) = 1
E(X/np)^2 = 1 + r/n
E(X/np)^3 = 1 + 3r/n + ((-1+p)(-1+2p))/(np)^2
E(X/np)^4 = 1 + 6r/n + O[1/n]^2
E(X/np)^5 = 1 + 10r/n + O[1/n]^2
E(X/np)^6 = 1 + 15r/n + O[1/n]^2
E(X/np)^7 = 1 + 21r/n + O[1/n]^2
...
where $r = (1-p)/p$
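These come from evaluating the moments exactly; the $1/n$ coefficient looks like $\binom{k}{2}\,r$. A minimal Python sketch of the computation:

```python
from fractions import Fraction
from math import comb

def moment(n, p, k):
    """Exact E[(X / (n p))^k] for X ~ Binomial(n, p); p should be a Fraction."""
    np_ = n * p
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) * (Fraction(x) / np_)**k
               for x in range(n + 1))

p = Fraction(1, 3)
r = (1 - p) / p                 # here r = 2
n = 400
for k in range(2, 8):
    exact = moment(n, p, k)
    print(k, float(exact), float(1 + comb(k, 2) * r / n))  # leading behaviour 1 + C(k,2) r/n
```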
So my guess would be that one could obtain a result like $E(X/(np))^k = 1 + O(k^2/n)$, but I'm not sure how I'd proceed.
For sums of $\{+1, -1\}$ random variables, I'm aware that we can bound the moments by replacing each one with a normal random variable with the same variance. However converting this result to the non central case doesn't seem obvious?
REPLY [2 votes]: The right universal bound is
$$E\left[\left(\frac{X}{np}\right)^k\right] \le \left(1+\frac{k}{2np}\right)^k.$$
For $k = O(\sqrt{np})$ this explains the $1+O(k^2/(np))$ behaviour.
I wrote a note detailing the answer here https://thomasahle.com/#paper-bi-moments .
Using the Moment Generating Function
Let $$m(t) = E[e^{t X}] = (1-p+pe^t)^n \le \exp(\mu(e^t-1)),$$ where $\mu=np$.
This last bound is the moment generation of a Poisson random variable, so the following bound will hold in that case as well.
Now $E[X^k] \le m(t)t^{-k}k!$ follows easily from the expansion $E[e^{tX}]=\sum_i E[X^i]t^i/i!$.
However, we will need the slightly stronger bound:
$$E[X^k] \le m(t)(k/(et))^k.$$
This follows from the basic inequality $1+x\le e^x$, where we substitute $tx/k-1$ for $x$ to get $tx/k \le e^{tx/k-1} \implies x^k \le e^{tx}(k/(et))^k$. Taking expectations we get the intended bound.
We define $B=k/\mu$ like above, and take $t$ such that $t e^t=B$.
(This $t$ is also known as $W(B)$, using the Lambert function.)
We then have
$$\begin{align}
E[(X/\mu)^k]
&\le
m(t)(\mu t)^{-k}(k/e)^k
\\&\le
\exp(\mu(e^t-1))\big(\frac{k}{e\mu t}\big)^k
\\&=
\exp(\mu(B/t-1)+tk-k)
\\&= \exp(k f(B))
,
\end{align}$$
where $f(B) = 1/t+t-1-1/B$.
We can bound $\exp(f(B))$ by $B/\log(B+1) \le 1+B/2 \le e^{B/2}$ using standard bounds on the Lambert function.
This finally gives the bounds
$$
\|X/\mu\|_k \le \frac{B}{\log(B+1)} \le e^{B/2}.
$$
Taking $k$ powers this means
$$
E[(X/\mu)^k] \le \exp(k^2/(2np)),
$$
where $e^{k^2/(2np)} = 1+O(k^2/(np))$ when $k = O(\sqrt{np})$.
Old attempt using norms:
I found the following interesting non-asymptotic bound:
Let $\|X\|_k = E[|X|^k]^{1/k}$ and use the triangle inequality
$$\|X\|_k \le np + \|X-np\|_k.$$
To bound the second term, we use Latala's formula for sums of iid. random variables:
$$\|X\|_k = \|\sum_i X_i - E[X_i]\|_k \lesssim \inf_{\max\{2,\frac kn\}\le s\le k} \frac ks \left(\frac nk\right)^{1/s} \|X_i - E[X_i]\|_s$$
For Bernoulli random variables with probability $p$: $\|B-p\|_k = \left(p (1-p)^k+(1-p) p^k\right)^{1/k} \le p^{1/k}$.
Let $\beta = \frac{k}{np}$.
We assume $k\le n$ and $np\ge 1$.
Optimizing in Latala's formula we have
$$
\|X\|_k \lesssim \begin{cases}
\sqrt{n p k} & \text{if $\beta < e$} \\
\frac{k}{\log\beta} & \text{if $\beta\ge e$}.
\end{cases}
$$
Putting it all together we have
$$
E\left[\left(\frac{X}{np}\right)^k\right] \le \begin{cases}
(1 + C\sqrt{\frac{k}{np}})^k & \text{if $\beta < e$} \\
(1 + C\frac{k}{n p\log\beta})^k & \text{if $\beta\ge e$}.
\end{cases}
$$
In particular for $k = O((n p)^{1/3})$ we get $E(\frac{X}{np})^k=1 + O(k^3/(n p))$.
This is not quite as good as the conjectured bound, but at least it is rigorous and works even when $p$ is also allowed to depend on $n$.
It also shows that the moments of $\frac{X}{np}-1$ are within a constant of the equivalent Gaussian, as long as $k < np$.
Likely one has to avoid the triangle inequality to show the stronger conjecture.<|endoftext|>
TITLE: Why is this easy "proof" of Brouwer's Fixed Point Theorem not correct/common?
QUESTION [8 upvotes]: Brouwer's Fixed Point Theorem states, essentially, that any continuous function on a closed disc to itself has a fixed point. I am familiar with the proof based on the impossibility of a retraction from a disc to its boundary and the proof based on Sperner's Lemma. Wikipedia lists a number of other proofs.
However, it seems there is a simpler "proof" - quoted because it could be wrong - that uses no fancy machinery and I'm wondering if it is right and if so, why it isn't well known.
We will prove that any continuous $f : [0, 1]^n \rightarrow [0, 1]^n$ has a fixed point by induction on $n$. $n = 1$ amounts to the Intermediate Value Theorem. For $n > 1$ our space is $[0, 1] \times [0, 1]^{n-1}$. By the 1-d case, for each $\mathbf{u} \in [0, 1]^{n-1}$, $x \mapsto f(x, \mathbf{u})_1$, the first component of $f(x, \mathbf{u})$, has a fixed point $x$. By continuity of $f$ we may choose this fixed point, $x(\mathbf{u})$, to vary continuously in $\mathbf{u}$. By the $(n-1)$-d case, for each $y \in [0, 1]$, $\mathbf{v} \mapsto f(y, \mathbf{v})_{2, \ldots, n}$ has a fixed point $\mathbf{v}$ and we may let $\mathbf{v}(y)$ vary continuously.
If $x(\mathbf{v}(0)) = 0$ then $(0, \mathbf{v}(0))$ is a fixed point of $f$; similarly for 1. Otherwise let $X = \{(x(\mathbf{u}), \mathbf{u}) \mid \mathbf{u} \in [0, 1]^{n-1}\}$. $X$ is the graph of a continuous function so it is closed and $[0, 1]^n \setminus X$ is open. Furthermore, $\mathbf{v}(0)$ and $\mathbf{v}(1)$ are in different components of $[0, 1]^n \setminus X$ so by an argument like the proof of the Intermediate Value Theorem, $\mathbf{v}$ must cross $X$ at some point, which is a fixed point of $f$.
REPLY [13 votes]: In general, $x(\mathbf{u})$ can't be chosen to be continuous. Here is a simple counter-example in $\mathbb R^2$:
$$f(x,y) =
\begin{cases}
2xy, & \text{if } y \le \frac12 \\
1-2(1-x)(1-y), & \text{if } y \ge \frac12
\end{cases}$$
If $y < \frac12$, the unique fixed $x$ is $x=0$; if $y > \frac12$, the unique fixed $x$ is $x=1$. (If $y=\frac12$, then $f(x,y)=x$ fixes all points in $[0,1]$.)<|endoftext|>
TITLE: If $f$ has more than one root in $K$, then $f$ splits and $K/k$ is Galois?
QUESTION [6 upvotes]: Let $f \in k[x]$ be an irreducible polynomial of prime degree $p$ such that $K \cong k[x]/f(x)$ is a separable extension. How do I see that if $f$ has more than
one root in $K$, then $f$ splits and $K/k$ is Galois?
REPLY [5 votes]: Let $L\supseteq K$ be a splitting field of $f$ over $k$, and let $G=\operatorname{Gal}(L/k)$ be the Galois group. As $f$ is separable, it has $p$ distinct roots in $L$. We view $G$ as a group of permutations of the roots, so also as a subgroup of the symmetric group $S_p$. Denote by $H=\operatorname{Gal}(L/K)\le G$ the subgroup associated to the intermediate field $K$.
The order of $G$ is divisible by $[K:k]=p$, so by Cauchy's theorem there exists an element $\sigma\in G$ of order $p$. When viewed as an element of $S_p$, $\sigma$ has to be a $p$-cycle.
Let $\alpha_1,\alpha_2$ be two roots of $f(x)$ in $K$. By replacing $\sigma$ with its suitable power we can without loss of generality assume that $\sigma(\alpha_1)=\alpha_2$. Number the zeros in such a way that $\sigma(\alpha_i)=\alpha_{i+1}$, for all $i=1,2,\ldots,p-1$, $\sigma(\alpha_p)=\alpha_1$.
The claim is to prove that all the zeros $\alpha_i\in K$. Assume contrariwise that this is not the case. Then we can find three zeros $\alpha_i,\alpha_{i+1},\alpha_{i+2}$ such that $\alpha_i,\alpha_{i+1}\in K$ but $\alpha_{i+2}\notin K$.
This implies that there exists an element $\tau\in H$ such that $\tau(\alpha_{i+2})\neq\alpha_{i+2}$. Consider the automorphism $\delta=\sigma^{-1}\tau\sigma$. As $\tau$ fixes all the elements of $K$
$$
\delta(\alpha_i)=\sigma^{-1}(\tau(\alpha_{i+1}))=\sigma^{-1}(\alpha_{i+1})=\alpha_i.
$$
Therefore $\delta$ fixes the elements of $k(\alpha_i)\subseteq K$. As $[K:k]=p$ is a prime, we can conclude that there are no intermediate fields between $k$ and $K$. Hence $k(\alpha_i)=K$, and $\delta\in H$.
But,
$$
\delta(\alpha_{i+1})=\sigma^{-1}(\tau(\alpha_{i+2}))\neq\sigma^{-1}(\alpha_{i+2})=\alpha_{i+1}.
$$
So $\delta$ does not fix the element $\alpha_{i+1}$ contradicting the above, and proving the claim.