TITLE: The other trace of the curvature tensor
QUESTION [6 upvotes]: I am using index notation here, since denoting traces in index notation is easier. Einstein summation convention assumed.
If $(M,g)$ is a Riemannian or pseudo-Riemannian manifold, and $$ R^\rho_{\sigma\mu\nu}=\partial_\mu\Gamma^\rho_{\nu\sigma}-\partial_\nu\Gamma^\rho_{\mu\sigma}+\Gamma^\rho_{\mu\lambda}\Gamma^\lambda_{\nu\sigma}-\Gamma^\rho_{\nu\lambda}\Gamma^\lambda_{\mu\sigma} $$
is the Riemann curvature tensor, then the only independent trace of $R$ is the Ricci tensor $R_{\mu\nu}=R^\sigma_{\mu\sigma\nu}$, since the trace $R^\sigma_{\sigma\mu\nu}$ is zero.
If we are, on the other hand, given an arbitrary linear connection, it is necessarily a $\text{GL}(n,\mathbb{R})$-connection, and there is nothing specific to be said about the first two indices of the curvature tensor, so the tensor field $Q_{\mu\nu}=R^\sigma_{\sigma\mu\nu}$ is not necessarily zero.
What is there to be said about this tensor? What is its geometric meaning? What does it signify that for a Riemannian curvature tensor, this is zero?
I do realize that if $\nabla$ is an arbitrary $g$-compatible connection then, for an arbitrary frame $e_{a}$ (Latin indices are frame indices, Greek indices are coordinate indices) we have $$d^\nabla g_{ab}=0=dg_{ab}-\omega^c_{\ a}g_{cb}-\omega^c_{\ b}g_{ac},$$ so $dg_{ab}=\omega_{ba}+\omega_{ab}$, so if $e_a$ is an orthonormal frame then the connection forms are skew-symmetric, and then the curvature form $\Omega^a_{\ b}=d\omega^a_{\ b}+\omega^a_{\ c}\wedge\omega^c_{\ b}$ is also skew-symmetric. Moreover, since unlike $\omega$, $\Omega$ is gauge-covariant, this skew-symmetry is preserved even if calculated in a non-orthonormal frame.
Therefore, if $Q_{\mu\nu}$ does not vanish, then $\nabla$ cannot be metric-compatible for any metric, I assume.
But I am curious about more info. Does the vanishing of $Q$ also imply that $\nabla$ is metric compatible for some metric? What else can be said about $Q$?
REPLY [7 votes]: For any affine connection $\nabla$ on a smooth manifold, the curvature $R_{ab}{}^c{}_d$ may be uniquely decomposed as
$$R_{ab}{}^c{}_d = C_{ab}{}^c{}_d + 2 \delta^c{}_{[a} {\mathsf P}_{b]d} + \beta_{ab} \delta^c{}_d \qquad (\ast)$$
for some totally tracefree $C$, called the projective Weyl tensor, and skew $\beta$; the tensor $\mathsf P$ is called the projective Schouten tensor. The First Bianchi Identity, $R_{[ab}{}^c{}_{d]} = 0$, implies that $-2 {\mathsf P}_{[ab]} = \beta_{ab}$.
Now, taking the trace over ${}^c{}_d$ gives
$$Q_{ab} := R_{ab}{}^c{}_c = -2 {\mathsf P}_{[ab]} + n \beta_{ab} = (n + 1) \beta_{ab}.$$ Then, taking the trace of $(\ast)$ over ${}^c{}_a$ implies that $Q_{ab} = -2 R_{[ab]}$, so $Q$ is, up to a constant multiple, the skew part of the Ricci curvature of $\nabla$.
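Spelling that last trace out, with the convention $R_{bd} := R_{ab}{}^a{}_d$ for the Ricci tensor: contracting $(\ast)$ over ${}^c{}_a$ gives $R_{bd} = (n-1){\mathsf P}_{bd} + \beta_{db}$, whose skew part is
$$R_{[bd]} = (n-1){\mathsf P}_{[bd]} - \beta_{bd} = -\tfrac{n-1}{2}\,\beta_{bd} - \beta_{bd} = -\tfrac{n+1}{2}\,\beta_{bd} = -\tfrac12\, Q_{bd},$$
using ${\mathsf P}_{[bd]} = -\tfrac12\beta_{bd}$ from the First Bianchi Identity.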
A more concrete geometric interpretation is this: Computing from $(\ast)$ gives that the curvature of the connection that $\nabla$ induces on the anticanonical bundle $\Lambda^n TM$ is $Q$, or equivalently that the curvature of the connection induced on the canonical bundle $\Lambda^n T^*M$ is $-Q$. Thus, this bundle locally admits parallel sections, that is, $\nabla$ (locally) preserves a volume form on $M$, iff $Q = 0$, corresponding to the fact that (by definition) $Q = 0$ iff $R$ takes values in ${\frak sl}(TM)$. In particular, the Levi-Civita connection of any metric $g$ preserves the volume form of the restriction of that metric to any open orientable subspace (endowed with either choice of orientation), so $Q = 0$ for a Levi-Civita connection, or, as you say, for any metric connection.
Expanding the Second Bianchi Identity, $\nabla_{[e} R_{ab]}{}^c{}_d = 0$, using $(\ast)$ implies that $dQ = 0$, so $Q$ defines a second cohomology class $[Q] \in H^2(M)$. If $\nabla$ is torsion-free, any connection projectively equivalent to $\nabla$, that is, sharing the same (unparameterized) geodesics as $\nabla$, has the form $$\hat\nabla_a \xi^b = \nabla_a \xi^b + \Upsilon_a \xi^b + \Upsilon_c \xi^c \delta^b{}_a$$
for some $\Upsilon \in \Gamma(T^*M)$ (and any choice of $\Upsilon$ gives projectively equivalent connections). The corresponding tensors $Q, \hat Q$ are related by $\hat Q = Q + 2 (n + 1) d\Upsilon$, and in particular, they differ by an exact form. Thus, the cohomology class $[Q] = [\hat Q]$ is actually an invariant of the projective structure---that is, the equivalence class of projectively equivalent connections---that $\nabla$ defines. On the other hand, this transformation rule for $Q$ shows that locally $\nabla$ is projectively equivalent to one with $Q = 0$ (such connections are sometimes called special). So, in the setting of local projective differential geometry, we may as well just work with special connections, which enjoy the convenient feature that ${\mathsf P}_{ab}$ and $R_{ab}$ are symmetric.
This formulation can be found, by the way, in $\S$3 of the following reference:
T. Bailey, M. Eastwood, A.R. Gover, "Thomas' structure bundle for conformal, projective and related structures" Rocky Mountain J. Math, 24 (1994), 1191-1217.
It is not true that vanishing of $Q$ implies that $\nabla$ is a Levi-Civita connection, or even that it is projectively equivalent to one. Naively one should expect as much: Vanishing of $Q$ implies that the (local) holonomy group of $\nabla$ based at any point $x$ is contained in $\textrm{SL}(T_x M)$, but if $\nabla$ is the Levi-Civita connection of a metric $g$, the holonomy group must be contained in the much smaller group $\textrm{SO}(g_x)$.
A simple example is the connection $\nabla$ on $\Bbb R^3$ whose nonzero Christoffel symbols are specified (in the canonical coordinates $(x^a)$) by $\Gamma_{21}^3 = \Gamma_{31}^2 = x^2$. (The projective structure this connection defines is the so-called Egorov projective structure, which is interesting for other reasons, too.) The nonzero components of curvature are specified by $R_{23}{}^1{}_2 = -R_{32}{}^1{}_2 = -1$, so $\nabla$ is special, but $\S$2.3 of the below reference shows that it is not a Levi-Civita connection, nor even projectively equivalent to one.
M. Dunajski, M. Eastwood, "Metrisability of three-dimensional path geometries", European J. Math (2015), 809-834.
TITLE: Classical version and idelic version of class field theory
QUESTION [6 upvotes]: Last semester, I took a course about class field theory and I learned about Artin reciprocity, which gives a map from ideal class group to Galois group,
$$
\left(\frac{L/K}{\cdot}\right):I_{K}\to Gal(L/K), \,\,\,\,\prod_{i=1}^{m}\mathfrak{p}_{i}^{n_{i}}\mapsto \prod_{i=1}^{m}\left(\frac{L/K}{\mathfrak{p}_{i}}\right)^{n_{i}}
$$
where $\left(\frac{L/K}{\mathfrak{p}_{i}}\right)$ is the Frobenius element corresponding to the prime ideal $\mathfrak{p}_{i}$. Today, I learned an adelic version of (global) class field theory, which is
$$
\mathbb{A}^{\times}_{F}/\overline{F^{\times}(F_{\infty}^{\times})^{o}}
\simeq G_{F}^{ab}$$
where $F$ is a number field, $\mathbb{A}_{F}$ is the adele ring of $F$ and $G_{F}^{ab}=Gal(F^{ab}/F)$.
I cannot understand how these two are connected. Could anyone explain the explicit relation between these two statements?
REPLY [3 votes]: For more clarity, let us make the definition of the Artin reciprocity map more precise:
1) Over $\mathbf Q$, CFT is the Kronecker-Weber theorem, which says that any finite abelian $L/\mathbf Q$ is contained in a cyclotomic field $\mathbf Q_m = \mathbf Q(\zeta_m)$. Such an $m$ is called a defining modulus for $L/\mathbf Q$ and the conductor $f_L$ of $L/\mathbf Q$ is the smallest (w.r.t. division) defining modulus of $L$. Given a defining modulus $m$ of $L$, set $C_m=(\mathbf Z/m\mathbf Z)^*$, and for $a\in C_m$, define the Artin symbol ($a,L/\mathbf Q$) to be the automorphism of $L$ sending $\zeta_m$ to $\zeta_m^{a}$ , and denote by $I_{L,m}$ its kernel, so as to get an isomorphism $ C_m/I_{L,m} \cong Gal(L/\mathbf Q)$ via the Artin symbol.
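For instance, take $L=\mathbf Q(i)=\mathbf Q_4$: then $m=4$ is a defining modulus (in fact the conductor), $C_4=(\mathbf Z/4\mathbf Z)^*=\{1,3\}$, and the Artin symbol $(3,L/\mathbf Q)$ sends $\zeta_4=i$ to $i^3=-i$, i.e. it is complex conjugation. Hence $a\mapsto (a,L/\mathbf Q)$ is injective, $I_{L,4}=\{1\}$, and $C_4/I_{L,4}\cong Gal(L/\mathbf Q)\cong \mathbf Z/2\mathbf Z$.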
2) In classical CFT over a number field $K$, the previous notions can be generalized, but in a very non-obvious way. Define a $K$-modulus $\mathfrak M$ to be the formal product of an ideal of the ring of integers $A_K$ and some infinite primes of $K$ (implicitly raised to the first power). In the sequel, for simplification, we'll "speak as if" $\mathfrak M$ were an ideal. Denote by $A_{\mathfrak M}$ the group of fractional ideals prime to $\mathfrak M$ and by $R_{\mathfrak M}$ the subgroup of principal fractional ideals $(x)$ s.t. $x$ is "congruent to" $1$ mod $\mathfrak M$, and put $C_{\mathfrak M}=A_{\mathfrak M}/R_{\mathfrak M}$. For a finite abelian extension $L/K$, define $I_{L/K,\mathfrak M}=N_{L/K}(C_{L,\mathfrak M})$, where $N_{L/K}$ is the norm of $L/K$. A defining $K$-modulus of $L/K$ is one s.t. $(C_{\mathfrak M}:I_{L/K,\mathfrak M})=[L:K]$, and the conductor $f_{L/K}$ is the "smallest" defining $K$-modulus of $L/K$. For a finite $K$-prime $\mathfrak P$, coprime with $\mathfrak M$, it can be shown that there exists a unique Artin symbol $(\mathfrak P , L/K) \in G(L/K)$ characterized by $(\mathfrak P, L/K)(x)\equiv x^{N\mathfrak P}$ mod $\mathfrak PA_L$ for any $x\in A_L$, with $N=N_{K/\mathbf Q}$. This definition can be extended multiplicatively to $C_{\mathfrak M}$, and the Artin reciprocity law is the isomorphism $C_{\mathfrak M}/I_{L/K,\mathfrak M} \cong G(L/K)$ via the Artin symbol.
3) In idelic CFT over a number field $K$, the previous $C_{\mathfrak M}$'s are replaced by idèle class groups. The idèle group $J_K$ is the group of invertible elements of the adèle ring of $K$ (equipped with the "restricted product topology") and the idèle class group $C_K$ is the quotient $J_K/K^*$. Write $C'_K=C_K/D_K$, where $D_K$ = the connected component of identity = the subgroup of infinitely divisible elements of $C_K$. For a $K$-modulus ${\mathfrak M}$, let $I_{\mathfrak M} = J_{\mathfrak M}.K^*/K^*$, where $J_{\mathfrak M}$ is the subgroup of idèles which are "congruent" to 1 mod $\mathfrak M$. Given an abelian $L/K$, a defining $K$-modulus $\mathfrak M$ is such that $I_{\mathfrak M}$ is contained in $N_{L/K}C_L$. The Artin global reciprocity map $(.,L/K)$ is defined as follows: by the Chinese Remainder theorem, for any $j \in J_K$, there exists $x \in K^*$ s.t. $j$ is "congruent to" $x$ mod ${\mathfrak M}$; then define $(j, L/K)$ to be the product of the elements $(\mathfrak P, L/K)^{n_\mathfrak P}$, where $n_\mathfrak P = \operatorname{ord}_{\mathfrak P}(jx^{-1})$, for all $\mathfrak P$ coprime to $\mathfrak M$. It is easy to see that this can be "passed to the quotient" to define a map $(., L/K) : C'_K \to G(L/K)$ s.t. $C'_K/N_{L/K}C'_L \cong G(L/K)$. This is the Artin reciprocity law in idelic terms. Now that we are rid of the cumbersome moduli $\mathfrak M$, we can take projective limits along the finite abelian extensions of $K$ to get a canonical isomorphism $C'_K \cong G(K^{ab}/K)$, which you can check to coincide with the (rather unexploitable) expression that you gave.
Needless to say, almost all the properties explained above are very elaborate and difficult theorems.
TITLE: Symplectic but not Hamiltonian Vector Fields
QUESTION [8 upvotes]: In symplectic geometry, given a manifold $M$ with closed nondegenerate symplectic 2-form $\omega$, it is known that a vector field $X$ is Hamiltonian if $$\iota_X\omega=dH$$ for some smooth function $H\in C^\infty(M)$. A vector field is symplectic if it preserves the symplectic structure along the flow, i.e. $$\mathcal L_X\omega=0\,.$$
One of the easiest ways to compare these conditions is to note that if $X$ is symplectic then $\iota_X\omega$ is closed, and if $X$ is Hamiltonian then $\iota_X\omega$ is exact. Consequently, all Hamiltonian vector fields are symplectic, but the converse is not true. Locally, however, the Poincaré lemma guarantees that every symplectic vector field is Hamiltonian.
Now consider symplectic 2-torus $(\mathbb T^2,d\theta\wedge d\phi)$ and a vector field $$X=\frac{\partial}{\partial \theta}\,.$$
Using the first de Rham cohomology, one usually concludes that $X$ is not Hamiltonian. However, I am unsure why: note that $\iota_X\omega=d\phi$, so it looks to me like this is exact. Of course, we see that $\phi$ is not globally defined on $\mathbb T^2$, so perhaps this is not correct. But this argument would seem to imply that for the symplectic 2-sphere $(S^2,d\theta\wedge d\phi)$, $X$ is not Hamiltonian (even though it should be, since it is symplectic and $H^1_{\text{de Rham}}(S^2)=0$).
Another example: Consider the symplectic 2-sphere $(S^2,d\theta\wedge dh)$, where $H(\theta,h)=h$ is a height function. In this case, the same vector field $X$ is Hamiltonian, since we obtained the required smooth Hamiltonian function $H$. Now I reverse the problem: consider another 2-torus $(\mathbb T^2,d\theta\wedge dh)$ and the same vector field $X$. Now it looks like $X$ is Hamiltonian, even though we know
$$H^1_{\text{de Rham}}(\mathbb T^2)\neq0\,.$$
From my (naive) understanding, the vanishing of $H^1_{\text{de Rham}}(M)$ should be the only obstruction for a symplectic vector field to be a Hamiltonian vector field, and this should not depend on the choice of the symplectic 2-form.
Question: What has gone wrong here? For the first example, for instance, it may have to do with seeing that $d\phi$ is not exterior derivative of $\phi$, which I may have misunderstood.
REPLY [9 votes]: For the first problem, you have already detected where the problem lies: the variable $\phi$ is not a function defined on the whole manifold. Indeed, it is a priori a function in a chart on the manifold, and a chart usually does not by itself cover the whole manifold.
On the other hand, the particular case of the torus is special because we can more or less canonically 'parametrize' the torus by $\mathbb{R}^2$ (which is its universal cover), for instance via the map $q: \mathbb{R}^2 \to \mathbb{R}^2 / \mathbb{Z}^2 \cong T^2$. As $\phi$ can be chosen to be one of two cartesian coordinates on $\mathbb{R}^2$, its derivative $d\phi$ (on the plane) is left invariant by any translation, in particular the ones by vectors in $\mathbb{Z}^2$. As such, $d\phi$ 'passes to the quotient' i.e. there exists a well-defined closed 1-form $\eta$ on $T^2$ such that $q^{\ast}\eta = d\phi$. This is another motivation to write $\eta = d\phi$, but note that $\phi$ itself would be a multi-valued function on the torus (and hence not a genuine function, so we wouldn't consider it as an antiderivative to $\eta$).
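In fact one can check directly that $\eta$ is closed but not exact: along the loop $\gamma(t)=q(0,t)$, $t\in[0,1]$, we get
$$\int_\gamma \eta = \int_0^1 d\phi = 1 \neq 0,$$
whereas Stokes' theorem gives $\int_\gamma dF = 0$ for any genuine function $F$ on $T^2$, since $\gamma$ is a closed loop. Hence $[\eta]\neq 0$ in $H^1_{dR}(T^2)$, and the field $X=\partial/\partial\theta$ of the question, for which $\iota_X\omega=\eta$, is symplectic but not Hamiltonian.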
On the sphere, any chart misses at least one point, so again it is not surprising that one can find an antiderivative of a closed 1-form inside this chart. But if you can't extend $\phi$ and $\theta$ to the whole sphere, it is not clear how you can extend their derivatives to globally defined closed 1-forms in the first place: your problem possibly does not show up. Besides, the fact that the vector field $X = \partial/\partial \theta$ on the sphere can be globally defined (by rotation invariance, with vanishing vectors at the poles) is not related to the (im)possibility that $\theta$ (or $d\theta$) is globally well-defined, but only to the fact that $X \lrcorner \omega$ is a closed (and exact) 1-form: an antiderivative is the height function, which is clearly not the angle 'function' $\theta$.
The obstruction for a symplectic vector field $X$ to be Hamiltonian is precisely whether the closed 1-form $X \lrcorner \omega$ is exact; in other words, whether the cohomology class $[X \lrcorner \omega] \in H^1_{dR}(M; \mathbb{R})$ vanishes. This question makes sense on any manifold; the point is that when $H^1_{dR}(M; \mathbb{R})=0$, the answer is 'yes' whatever the symplectic field $X$. So on the 2-sphere, any symplectic vector field is Hamiltonian, whereas on the torus it depends on the symplectic vector field considered. Put differently, the (non)vanishing of the first cohomology group is the obstruction to the equality $Symp(M, \omega) = Ham(M, \omega)$.
TITLE: Reconstructing a functional from its Euler-Lagrange equations
QUESTION [5 upvotes]: Is it true that Euler-Lagrange equations associated to a functional determine the functional?
Suppose I give you an equation and I claim that it is an Euler-Lagrange equation of some functional. Can you tell me what was the functional?
Of course, there is always more than one functional with prescribed E-L equations, since the critical points of $E$ and of $\phi(E)$, where $\phi:\mathbb{R} \to \mathbb{R}$ is smooth and strictly monotonic, are identical.
(By the chain rule, $(\phi \circ E)'(x)=\phi'(E(x))\cdot E'(x)$.)
Is it true that there is a functional $E$ whose E-L equations are the
prescribed ones, and every other functional with the same E-L equations is a function of $E$?
One can think on different ways to formalize this question like different choices for the domain of the functional: paths in a manifold, real valued functions on $\mathbb{R}^n$, mappings between Riemannian manifolds etc, but at this stage of the game I don't want to choose a specific form yet. (Although I am particularly interested in the latter case).
REPLY [5 votes]: It is well-known that (i) adding boundary/total divergence terms and/or (ii) overall rescaling of a functional preserve the Euler-Lagrange (EL) equations.
On the other hand, there is in general no classification of possible functionals that lead to a given set of EL equations.
An instructive example from Newtonian point mechanics is given in this Phys.SE post, where two Lagrangians $L=T-V$ and $L=\frac{1}{3}T^2+2TV-V^2$ both have Newton's second law as their EL equation.
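One can let a CAS confirm that example; below is a minimal SymPy sketch (my code, not part of either post; `euler_equations` computes the variational derivative):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
x = sp.Function('x')
V = sp.Function('V')

T = m * sp.diff(x(t), t)**2 / 2               # kinetic energy
L1 = T - V(x(t))                              # the standard Lagrangian
L2 = T**2 / 3 + 2 * T * V(x(t)) - V(x(t))**2  # the exotic one

eq1 = euler_equations(L1, x(t), t)[0]   # Newton: -m x'' - V'(x) = 0
eq2 = euler_equations(L2, x(t), t)[0]

# eq2 should factor as -(m x'^2 + 2 V)(m x'' + V'(x)), i.e. as
# -2 (T + V)(m x'' + V'(x)), so both give Newton's law where T + V != 0.
print(eq1)
print(sp.factor(eq2.lhs))
```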
Another example: The functional in this Math.SE post has the same EL equation as the functional $F[y]=\int_0^3 \! \mathrm{d}x~y^{\prime 2}.$
TITLE: Algebraic extension of $\Bbb Q$ with exactly one extension of given degree $n$
QUESTION [13 upvotes]: Let $n \geq 2$ be any integer. Is there an algebraic extension $F_n$ of $\Bbb Q$ such that $F_n$ has exactly one field extension $K/F_n$ of degree $n$?
Here I mean "exactly one" in a strict sense, i.e. I don't allow "up to (field / $F$-algebra) isomorphisms". But a solution with "exactly one up to field (or $F$-algebra) isomorphisms" would also be welcome.
I'm very interested in the case where $n$ has two distinct prime factors.
My thoughts:
This answer provides a construction for $n=2$. I was able to generalize it for $n=p^r$ where $p$ is an odd prime.
Let
$S = \left\{\zeta_{p^r}^j\sqrt[p^r]{2} \mid 0 \leq j < p^r \right\}$.
Then
$$\mathscr F_S =
\left\{L/\Bbb Q \text{ algebraic extension} \mid \forall x \in S,\; x \not \in L \text{ and } \zeta_{p^r} \in L \right\}
=\left\{L/\Bbb Q \text{ algebraic extension} \mid \sqrt[p^r]{2} \not \in L \text{ and } \zeta_{p^r} \in L \right\}
$$ has a maximal element $F$, by Zorn's lemma.
In particular, we have
$$
F \subsetneq K \text{ and } K/\Bbb Q \text{ algebraic extension}
\implies
\exists x \in S,\; x \in K \implies \exists x \in S,\; F \subsetneq F(x) \subseteq K
$$
But $X^{p^r}-2$ is the minimal polynomial of any $x \in S$ over $F$ : it is irreducible over $F$ because $2$ is not a $p$-th power in $F$.
Therefore $F(x)$ has degree $p^r$ over $F$ and using the implications above, we conclude that $F(x) = F(\sqrt[p^r]{2})$ is the only extension of degree $p^r$ of $F$, when $x \in S$.
Assume now that we want to build a field $F$ with the desired property for some $n=\prod_{i=1}^r p_i^{n_i}$. I tried to do some kind of compositum, without any success.
I have some trouble with the irreducibility over $F$ of the minimal polynomial of some $x \in S$ ($S$ suitably chosen) over $\Bbb Q$...
I know that $\mathbf C((t))$ is quasi-finite and embeds abstractly in $\bf C$, so there is an uncountable subfield of $\bf C$ having exactly one field extension of degree $n$ for any $n \geq 1$.
REPLY [4 votes]: If you choose a random $\sigma \in \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ and consider the fixed field $K\subset \bar{\mathbb{Q}}$ of $\sigma$, then $\bar{\mathbb{Q}}/K$ will be Galois. The Galois group $G$ will almost always be $\hat{\mathbb{Z}}$, the profinite completion of the integers, and this is a group that has exactly one finite index subgroup of each index. Then $K$ will solve your problem for each $n$. There will be infinitely many such fields.
The problem is that it is hard to write down any concrete element of the absolute Galois group (except complex conjugation, as lulu referred to), and then also hard to write down its fixed field. So I'm afraid this answer is very nonconstructive.
See here for details:
https://mathoverflow.net/questions/273224/what-is-the-probability-of-generating-a-given-procyclic-subgroup-in-mathrmgal
TITLE: Prove that $\sqrt{2}+\sqrt{3}+\sqrt{5}$ is irrational. Generalise this.
QUESTION [10 upvotes]: I'm reading R. Courant & H. Robbins' "What is Mathematics: An Elementary Approach to Ideas and Methods" for fun. I'm on pages $60$ and $61$ of the second edition. There are three exercises on proving numbers irrational spanning these pages; the last is as follows.
Exercise $3$: Prove that $\phi=\sqrt{2}+\sqrt{3}+\sqrt{5}$ is irrational. Try to make up similar and more general examples.
My Attempt:
Lemma: The number $\sqrt{2}+\sqrt{3}$ is irrational. (This is part of Exercise 2.)
Proof: Suppose $\sqrt{2}+\sqrt{3}=r$ is rational. Then
$$\begin{align}
2&=(r-\sqrt{3})^2 \\
&=r^2-2r\sqrt{3}+3
\end{align}$$ is rational, so that
$$\sqrt{3}=\frac{r^2+1}{2r}$$ is rational, a contradiction. $\square$
Let $\psi=\sqrt{2}+\sqrt{3}$. Then, considering $\phi$,
$$\begin{align}
5&=(\phi-\psi)^2 \\
&=\phi^2-2\psi\phi+5+2\sqrt{6}.
\end{align}$$
I don't know what else to do from here.
My plan is/was to use the Lemma above as the focus for a contradiction, showing $\psi$ is rational somehow.
Please help :)
Thoughts:
The "try to make up similar and more general examples" bit is a little vague.
The question is not answered here as far as I can tell.
REPLY [3 votes]: An alternative solution. Assume that $\alpha=\sqrt{2}+\sqrt{3}+\sqrt{5}=\frac{a}{b}\in\mathbb{Q}$. By quadratic reciprocity, there is some prime $p>b$ such that $3$ and $5$ are quadratic residues $\!\!\pmod{p}$ while $2$ is not. That implies that $\alpha$ is an algebraic number over $\mathbb{K}=\mathbb{F}_p$ with degree $2$, since $\sqrt{2}$ does not belong to $\mathbb{K}$ but belongs to a quadratic extension of $\mathbb{K}$. On the other hand $b<p$ makes $b$ invertible in $\mathbb{F}_p$, so $\alpha=\frac{a}{b}$ would reduce to the element $ab^{-1}$ of $\mathbb{F}_p$ itself, of degree $1$: a contradiction.
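Such primes are easy to find by brute force; here is a small sketch (the helper functions are mine, not from the answer):

```python
def is_prime(n):
    return n > 1 and all(n % k for k in range(2, int(n**0.5) + 1))

def is_qr(a, p):
    """Is a a quadratic residue mod the odd prime p? (Euler's criterion)"""
    return pow(a, (p - 1) // 2, p) == 1

primes = [p for p in range(3, 500) if is_prime(p)
          and is_qr(3, p) and is_qr(5, p) and not is_qr(2, p)]
print(primes)   # [11, 59, 61, 109, ...]
```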
TITLE: Distribution of interarrival times in a Poisson process
QUESTION [5 upvotes]: I am new to statistics. I am studying the Poisson process, and I have certain questions to ask.
A process of arrival times in continuous time is called a Poisson process of rate $\lambda$ if the following two conditions hold:
The number of arrivals in an interval of length $t$ is a $\text{Pois}(\lambda t)$ random variable.
The numbers of arrivals that occur in disjoint time intervals are independent of each other.
Let $X_1$ denote the time of first arrival in a Poisson process of rate $\lambda$. Let $X_2$ denote the time elapsed between the first arrival and the second arrival. We can find the distribution of $X_1$ as follows:
$$\mathbb{P}(X_1>t)=\mathbb{P}\left(\text{No arrivals in }[0,t]\right)=\mathrm{e}^{-\lambda t}$$
Thus $\mathbb{P}(X_1\le t)=1-\mathrm{e}^{-\lambda t}$, and hence $X_1\sim\text{Expo}(\lambda)$.
Suppose we want to find the conditional distribution of $X_2$ given $X_1$. I found the following discussion in my textbook.
$\begin{equation}\begin{split}\mathbb{P}(X_2>t\mid X_1=s) & = \mathbb{P}\left(\text{No arrivals in }(s,s+t] \mid \text{Exactly one arrival in [0,s]} \right) \\
& =\mathbb{P}\left(\text{No arrivals in }(s,s+t]\right)\\
&=\mathrm{e}^{-\lambda t}\end{split}\end{equation}$.
Thus, $X_1$ and $X_2$ are independent, and $X_2\sim\text{Expo}(\lambda)$.
However, I have the following questions regarding the above discussion.
Since $X_1$ is a continuous random variable, $\mathbb{P}(X_1=k)=0$ for every $k\in\mathbb{R}$. Thus, $\mathbb{P}(X_1=s)=0$. In other words, we are conditioning on an event with zero probability. But when I studied conditional probability, conditioning on events with zero probability was not defined. So in this case, is conditioning on an event with zero probability valid?
Second, assuming that conditioning on $X_1=s$ is valid, what we have found is the conditional distribution of $X_2$ given $X_1=s$. In other words, the conditional distribution of $X_2$ given $X_1$ is $\text{Expo}(\lambda)$, not the distribution of $X_2$ itself. But the author claims that $X_2\sim\text{Expo}(\lambda)$. Why is this true?
REPLY [3 votes]: If the conditional distribution of $X_2$ given the event $X_1=s$ is the same for all values of $s$, then the marginal (i.e. not conditional) distribution of $X_2$ is also that same distribution, and they are independent.
If the conditional distribution of $X_2$ given $X_1=s$ depended on $s$, then the distribution of $X_2$ would be a weighted average of those conditional distributions, with weights given by the distribution of $X_1$. But if all of those conditional distributions are the same, then you're taking a weighted average of things that are all the same.
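Written out with the density of $X_1$, this is just the law of total probability:
$$\mathbb{P}(X_2>t)=\int_0^\infty \mathbb{P}(X_2>t\mid X_1=s)\,f_{X_1}(s)\,\mathrm{d}s=\int_0^\infty \mathrm{e}^{-\lambda t}\,\lambda\mathrm{e}^{-\lambda s}\,\mathrm{d}s=\mathrm{e}^{-\lambda t},$$
since the conditional probability $\mathrm{e}^{-\lambda t}$ does not depend on $s$ and the density integrates to $1$.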
How to define conditioning on an event of probability $0$ is somewhat more delicate; maybe I'll say more about that later.
TITLE: Is the Lie Algebra of a connected abelian group abelian?
QUESTION [10 upvotes]: Is the Lie Algebra of a connected abelian group abelian? I guess that this should be true, but how do you prove it?
REPLY [26 votes]: Yes, and connectedness is not necessary. I know three proofs:
Proof 1
When $G$ is abelian, the inverse map
$$i:G\to G,\quad g\mapsto g^{-1}$$
is a group homomorphism. Hence, its differential at $1\in G$
$$di_1:\mathfrak{g}\to\mathfrak{g},\quad X\mapsto -X$$
is a Lie algebra homomorphism. But then
$$-[X,Y]=di_1([X,Y])=[di_1(X),di_1(Y)]=[-X,-Y]=[X,Y],$$
so $[X,Y]=0$.
Proof 2
For any Lie group $G$, the differential at $1$ of the map $\mathrm{Ad}:G\to GL(\mathfrak{g})$ is $\mathrm{ad}:\mathfrak{g}\to\mathrm{End}(\mathfrak{g})$ where $\mathrm{ad}(X)(Y)=[X,Y]$. But when $G$ is abelian, $\mathrm{Ad}$ is the constant map to the identity (since $\mathrm{Ad}(g)$ is the differential of the map $G\to G,a\mapsto gag^{-1}$ which is the identity when $G$ is abelian), so $\mathrm{ad}=0$.
Proof 3
For any Lie group $G$ we have that for $X,Y\in\mathfrak{g}$,
$$\exp(sX)\exp(tY)=\exp(tY)\exp(sX),\forall s,t\in\mathbb{R}\quad\iff[X,Y]=0.$$
If $G$ is abelian, the left-hand side always holds, so $[X,Y]=0$ for all $X,Y\in\mathfrak{g}$.
Remark about the converse
The last proof can be used to prove the converse when $G$ is connected. This is because $\exp$ restricts to a diffeomorphism from a neighborhood of $0$ in $\mathfrak{g}$ to a neighborhood of the identity in $G$ and a connected group is generated by any neighborhood of the identity.
However, connectedness is necessary for the converse. For example, if $T$ is any abelian connected Lie group and $H$ is any non-abelian finite group, then $G=T\times H$ is a non-abelian Lie group with abelian Lie algebra.
TITLE: How do we prove a set of axioms never leads to a contradiction?
QUESTION [8 upvotes]: How can we be sure that a set of axioms will never lead to a contradiction? If there is a contradiction, we will find it sooner or later. But if there is none, how can we be sure we have chosen the axioms reasonably, so that no contradiction will ever arise?
Is there a general approach, or does every known axiom set require its own specific proof? (For example, does such a proof exist for Peano's axioms?)
REPLY [10 votes]: Proofs presuppose axioms. In order to prove that "$T$ is consistent," we need to work within some other axiom system $S$; this, then, means that our proof is only as convincing as our belief in the consistency of $S$. Note that even without Goedel's incompleteness theorem, we shouldn't be convinced by $S$ proving "I am consistent" - of course it would if it were inconsistent! So I actually think Goedel is a red herring here.
That said, this doesn't kill the project of proving consistency, it just changes it. In order to prove that a theory $T$ is consistent, we want to find some theory $S$ for which we have good reason to believe that it is consistent, and then prove inside $S$ that $T$ is consistent. One standard example of this is ordinal analysis: the goal is to assign a linear order $\alpha_T$ to $T$ which is "clearly" well-ordered, and then show that the very weak theory PRA, together with "$\alpha_T$ is well-ordered", proves that $T$ is consistent (I'm skipping many details here). For $T=PA$, for instance, this was done by Gentzen; the relevant ordering is the ordinal $\epsilon_0$. This is, however, of dubious use for convincing us of the consistency of theories: for weak theories like $PA$, I find the consistency of $PA$ more "obviously true" than the well-orderedness of $\epsilon_0$, and for stronger theories the relevant $\alpha_T$s are incredibly complicated to describe.
EDIT: Symplectomorphic asked about the model-theoretic answer: we know a theory is consistent if we can exhibit a model. I did omit this above, so let me address it now. What I want to convince you of is that this is a bit more complicated than it sounds. I claim that - even if you have a model of your theory in hand - you're still going to need to do some work to convince me of the consistency of your theory, and ultimately my first paragraph above is still going to be relevant.
So suppose you have a theory $T$ you're trying to convince me is consistent, and you have a model $\mathcal{M}$ of $T$ "in hand" (whatever that means). What do you need to persuade me about?
First, you have to prove that having a model means your theory is consistent. This sounds trivial, but it's really a fact about our proof system - soundness. It's an extremely basic fact, but technically something that requires proof.
Second, when we exhibit a model, what we're really doing is describing a mathematical object. Well, you need to prove to me that it exists. There are really complicated mathematical objects out there, and theories we believe to be consistent which provably have no "simple" models (like ZFC), so this really isn't a silly objection in general.
Finally, even if I'm convinced that our logic is sound, and that the structure you've described for me exists, you need to convince me that it is in fact a model of your theory! And the more complicated your theory is, the more complicated your model will be, and hence the more difficult this task will be. In fact, this is super hard in general: is $(\mathbb{N}; +, \times)$ a model of the sentence, "There are infinitely many twin primes"? How about "ZFC is consistent"?
Now, the first obstacle is a rather silly one - I think it's fine to take the soundness of logic for granted. But the second and third aren't so trivial (and even the first isn't really completely trivial). What I'm saying is, there's no way to ground a claim of consistency as solidly as a claim of inconsistency. To show a theory is inconsistent, you exhibit a proof of a contradiction; and then I'm completely convinced. To show that a theory is consistent by exhibiting a model, you need to build a model and verify that it satisfies the theory, and each of those steps implicitly takes place in a background theory whose consistency I could in principle question.
TITLE: Two inequalities involving the rearrangement inequality
QUESTION [6 upvotes]: Well, there are two more inequalities I'm struggling to prove using the Rearrangement Inequality (for $a,b,c>0$):
$$
a^4b^2+a^4c^2+b^4a^2+b^4c^2+c^4a^2+c^4b^2\ge 6a^2b^2c^2
$$
and
$$a^2b+ab^2+b^2c+bc^2+ac^2+a^2c\ge 6abc
$$
They seem somewhat similar, so I hope there's an exploitable link between them. They fall easily under Muirhead, yet I cannot figure out how to prove them using the Rearrangement Inequality.
Any hints greatly appreciated.
REPLY [2 votes]: \begin{eqnarray*}
a(b-c)^2+b(c-a)^2+c(a-b)^2 \geq 0
\end{eqnarray*}
Expanding the left-hand side gives
$$a^2b+ab^2+b^2c+bc^2+a^2c+ac^2-6abc \geq 0,$$
which is precisely the second inequality. Now substitute $a^2$ for $a$ etc. and the first inequality follows.
TITLE: Understanding impredicative definitions
QUESTION [7 upvotes]: In studying the mathematics of Frege, Russell, and Zermelo, I wanted to learn more about impredicative/predicative definitions to resolve some questions I had.
1. How does banning impredicative definitions avoid Russell's Paradox?
From what I read, the Vicious Circle Principle played a role here: "No entity can be defined in terms of a totality to which this entity belongs". From this, I can see that this does indeed ban impredicative definitions. Is there more to this that I'm missing?
2. Does ZFC allow impredicative definitions? If it does, how does it avoid Russell's Paradox?
Zermelo and Fraenkel developed ZFC, and they did allow impredicative definitions; they did not allow the existence of a universal set, referred only to pure sets/proper classes, and prevented models from containing elements of sets that are not themselves sets. Were there other factors that allowed ZFC to avoid Russell's paradox?
Thanks for reading & helping!
REPLY [7 votes]: (1) You are right, with caveats. The caveat is that "impredicative" is an intuition that Russell tried to pin the blame for the paradox on -- and then he spent reams of words and many years trying to define what exactly "impredicative" means, such that banning it would both avoid the paradoxes and still allow ordinary mathematics. The results were not exactly successful -- at least they didn't catch on.
(2a) Yes, ZFC allows impredicative definitions. For example let's define
A natural number $n$ is called hooplish if every subset $A$ of $\mathbb N$ with the property that every prime power is a sum of at most $n$ elements of $A$ must contain an arithmetic sequence of length $n$.
(The details of this don't matter -- in fact, I have no idea which numbers are or are not hooplish, or whether the concept is trivial or not). What matters is that "$n$ is hooplish" can certainly be defined by a formula in the language of set theory, and therefore ZFC's Axiom of Separation allows us to define
$$ H = \{ n\in\mathbb N \mid n\text{ is hooplish} \} $$
According to this definition, in order to figure out whether some number is in $H$, we need to quantify over all subsets of $\mathbb N$, including $H$ itself. That is by every reasonable standard impredicative! But ZFC has no problem with it; it promises us that there is a set with this property.
And nobody has, so far, been able to leverage that guarantee into a proof of a contradiction.
The philosophical underpinning of this is the view that the Axiom of Separation does not generate the subsets of $\mathbb N$ -- in the intended interpretation they are all there from the beginning, and the axiom just explains that we can pick one of them in such-and-such way.
(2b) ZFC avoids Russell's paradox by not having an axiom that guarantees that $\{x\mid x\notin x\}$ describes a set. ZFC doesn't say that the problem with the definition is that it is "impredicative", but simply that it doesn't fit into any of the precisely enumerated kinds of definitions that ZFC does allow.
Russell thought that banning impredicative definitions would be one way to avoid the paradoxes while preserving ordinary mathematics. Just because he said so, however, doesn't mean that he was right -- opinions seem to be divided on whether, in his quest to preserve ordinary mathematics, he didn't effectively open a back door to at least some kind of impredicative definitions.
And in any case, I don't think Russell claimed such a ban would be the only way to reach the goal (though he evidently was of the opinion it would be the best way, if only the details could be gotten right). ZFC simply follows a different strategy, one that seems to be pretty successful so far.
TITLE: Find the sum of $\binom{2016}{4} + \binom{2016}{8} +\binom{2016}{12} + \dots + \binom{2016}{2016}$
QUESTION [10 upvotes]: Problem:
Find
$$\dbinom{2016}{4} + \dbinom{2016}{8} +\dbinom{2016}{12} + \dots + \dbinom{2016}{2016}$$
I don't know how to attempt this problem, other than noting that this sum is equivalent to the sum of the coefficients of the terms whose degree is a positive multiple of $4$ in the polynomial,
$$P(x) = (x+1)^{2016}$$
I know that there's a duplicate of this problem somewhere, but I just can't find it on the website. Any help is appreciated!
REPLY [5 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
\sum_{n = 1}^{504}{2016 \choose 4n} & =
-1 + \sum_{n = 0}^{2016}{2016 \choose n}
{1 + \pars{-1}^{n} + \ic^{n} + \pars{-\ic}^{n} \over 4}
\\[5mm] & =
-1 + {1 \over 4}\sum_{n = 0}^{2016}{2016 \choose n} +
{1 \over 4}\sum_{n = 0}^{2016}{2016 \choose n}\pars{-1}^{n}
\\[2mm] & +
{1 \over 2}\,\Re\sum_{n = 0}^{2016}{2016 \choose n}\ic^{n}
\\[5mm] & =
-1 + {1 \over 4}\,\pars{1 + 1}^{2016}+ {1 \over 4}\,\pars{1 - 1}^{2016} +
{1 \over 2}\,\Re\pars{1 + \ic}^{2016}
\\[5mm] & =
-1 + 2^{2014} + {1 \over 2}\,\Re\pars{2^{1008}\expo{504\pi\ic}} =
\bbx{\ds{-1 + 2^{2014} + 2^{1007}}}
\end{align}
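A one-line numerical confirmation of the boxed value (my sketch; needs Python 3.8+ for `math.comb`):

```python
from math import comb

s = sum(comb(2016, 4 * n) for n in range(1, 505))
print(s == 2**2014 + 2**1007 - 1)   # True
```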
TITLE: Show that $\int_{-\infty}^{\infty}\frac{\cos(x)}{(x^2+1)^2} dx = \frac{\pi}{e}$ using complex analysis
QUESTION [5 upvotes]: I am trying to show that $\int_{-\infty}^{\infty}\frac{\cos(x)}{(x^2+1)^2} dx = \frac{\pi}{e}$ by considering integration around a rectangle in the upper half complex plane and substituting $z = x$ on the real axis. But I am not sure how to proceed from here. Any help is appreciated.
REPLY [4 votes]: I wouldn't integrate along a rectangle, but rather along a semicircle running along the real axis and then being closed in the upper half plane.
Since $\sin x$ is odd, we have:
\begin{align}
\int_{-\infty}^\infty\frac{\cos(x)}{(x^2+1)^2}dx=\int_{-\infty}^\infty\frac{\cos(x)+i\sin(x)}{(x^2+1)^2}dx=\int_{-\infty}^\infty\frac{e^{ix}}{(x^2+1)^2}dx\\
\end{align}
Consider the contour integral, where the contour $\Gamma$ runs along the real axis from $-R$ to $R$ and is then closed in the upper half plane, which we can split into two parts:
$$\oint_\Gamma\frac{e^{iz}}{(z^2+1)^2}dz=\int_{-R}^R\frac{e^{ix}}{(x^2+1)^2}dx+\int_\text{Arc}\frac{e^{iz}}{(z^2+1)^2}dz$$
However, by the Residue Theorem, we have:
\begin{align}
\oint_\Gamma\frac{e^{iz}}{(z^2+1)^2}dz &= 2\pi i \text{ Res}\left(\frac{e^{iz}}{(z^2+1)^2},i\right)\\
&=2\pi i\lim\limits_{z\rightarrow i}\frac{d}{dz}\frac{e^{iz}}{(z+i)^2}\\
&=2\pi i\lim\limits_{z\rightarrow i}\frac{ie^{iz}(z+3i)}{(z+i)^3}\\
&=2\pi i\frac{ie^{-1}\cdot4i}{(2i)^3}\\
&=\frac{\pi}{e}
\end{align}
Thus we have:
$$\frac{\pi}{e}=\int_{-R}^R\frac{e^{ix}}{(x^2+1)^2}dx+\int_\text{Arc}\frac{e^{iz}}{(z^2+1)^2}dz$$
Taking the limit as $R\rightarrow\infty$, the integral along the arc of the semicircle vanishes: in the upper half plane $|e^{iz}|=e^{-\operatorname{Im} z}\le 1$, while the denominator grows like $R^4$, so by the ML inequality the arc contribution is $O(R\cdot R^{-4})\rightarrow 0$.
Thus, we are left with our desired result:
$$\int_{-\infty}^\infty\frac{\cos(x)}{(x^2+1)^2}dx = \frac{\pi}{e}$$
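If you want a numerical sanity check on the result (my sketch, assuming SciPy is available):

```python
import math
from scipy.integrate import quad

value, error = quad(lambda x: math.cos(x) / (x**2 + 1)**2,
                    -math.inf, math.inf)
print(value, math.pi / math.e)   # both about 1.1557273
```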
TITLE: Prove that for any even positive integer $n$, $n^2-1 \mid 2^{n!}-1$
QUESTION [7 upvotes]: Prove that for any even positive integer $n$, $n^2-1 \mid 2^{n!}-1$
This is from a book. They have given the proof, but I didn't understand it well. I am looking for a simpler proof, or it would be helpful if someone explained this a little bit more:
Proof: Let $m = n+1$; then we need to prove that $m(m-2) \mid 2^{(m-1)!}-1$. Because $\phi(m) \mid (m-1)!$, we have $2^{\phi(m)} -1 \mid 2^{(m-1)!} -1$, and from Euler's theorem $m \mid 2^{\phi(m)}-1$, hence $m \mid 2^{(m-1)!}-1$; similarly $(m-2) \mid 2^{(m-1)!}-1$. Because $m$ is odd, $\gcd(m,m-2)=1$ and the conclusion follows.
REPLY [2 votes]: $2\,$ is coprime to $\,n\!+\!1\,$ so $\,{\rm mod}\ n\!+\!1\!:\, $ $\,2$ has order $\le n,\,$ i.e. $\,\color{#c00}{2^{\large k}\! \equiv 1}\,$ for $\,k\le n,\,$ thus $\,k\mid n!\:$ hence $\,2^{\large n!}\!\equiv (\color{#c00}{2^{\large k}})^{\large n!/k}\!\equiv 1.\,$ Similarly $\,2^{\large n!}\!\equiv 1\pmod{\!n\!-\!1}.$ Thus $\,2^{\large n!}\!-1\,$ is divisible by $\,n\!+\!1\,$ and $\,n\!-\!1\,$ hence also by their lcm = product, since $\:\gcd(n\!+\!1,n\!-\!1) = \gcd(n\!+\!1,2) = 1,\,$ by $n$ even.
REPLY [2 votes]: I have a shorter proof: because $n$ is even, $n^2-1$ is odd and $\gcd(n^2-1,2)=1$, thus according to Euler's theorem
$$2^{\varphi(n^2-1)}\equiv 1 \pmod{n^2-1}$$
But totient function is multiplicative and $\gcd(n-1,n+1)=1$ or
$$\varphi(n^2-1)=\varphi(n+1)\cdot \varphi(n-1)\leq n\cdot (n-2)$$
TITLE: Do we know a transcendental number with a proven bounded continued fraction expansion?
QUESTION [7 upvotes]: The simple continued-fraction expansion of the transcendental number $e$ is known to be unbounded. What about bounded continued fractions?
Do we know any transcendental number for which it is proven that the simple continued-fraction expansion is bounded?
It is conjectured that the simple continued-fraction expansions of algebraic numbers with minimal polynomial degree greater than $2$ are unbounded.
If this were true, every bounded non-periodic infinite simple continued-fraction expansion would correspond to a transcendental number.
But to my knowledge, it has not been proven for a single algebraic number with minimal polynomial degree greater than $2$ that its simple continued-fraction expansion is unbounded.
REPLY [11 votes]: Do we know any transcendental number for which it is proven that the simple continued-fraction-expansion is bounded?
Here's one for you:
$\begin{align}
K &= \sum^\infty_{n=0}10^{-2^{n}} \\
&= 10^{-1}+10^{-2}+10^{-4}+10^{-8}+10^{-16}+10^{-32}+10^{-64}+\ldots \\
&= 0.\mathbf{1}\mathbf{1}0\mathbf{1}000\mathbf{1}0000000\mathbf{1}000000000000000\mathbf{1}0000000000000000000000000000000\mathbf{1}\ldots
\end{align}$
a constant with 1's in positions corresponding to an integer power of two and zeros everywhere else.
K has a canonical continued fraction expansion of:
$\left[0; 9, 12, 10, 10, 8, 10, 12, 10, 8, 12, 10, 8, 10, 10, 12, 10, 8, 12, 10, 10, 8, 10, 12, 8, 10, 12, 10, 8, 10, 10, 12, 10, 8, 12, 10, 10, 8, 10, 12, 10, 8, 12, 10, 8, 10, 10, 12, 8, 10, 12, 10, 10, 8, 10, 12, 8, 10, 12, 10, 8, 10, 10, 12, 10, 8, 12, 10, 10, 8, 10, 12, 10, 8, 12, 10, 8, 10, 10, 12, 10, 8, 12, 10, 10, 8, 10, \ldots\right]$
After calculating the first 1000000 terms on Wolfram Cloud, I'm fairly certain that (except for the first term which is 0 and the second term which is 9) all of the terms are 8, 10, or 12. (Maybe someone can prove this)
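One does not need Wolfram for a quick check; here is a small exact-arithmetic sketch (my code, not the answerer's) -- truncating $K$ after a dozen terms already pins down the leading continued-fraction terms:

```python
from fractions import Fraction

# Partial sum of K = sum_{n>=0} 10^(-2^n); the truncation is so close to K
# that the leading continued-fraction terms agree.
K = sum(Fraction(1, 10**(2**n)) for n in range(12))

def cf_terms(x, count):
    """First `count` terms of the simple continued fraction of x."""
    terms = []
    for _ in range(count):
        a = x.numerator // x.denominator   # floor(x)
        terms.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x
    return terms

print(cf_terms(K, 30))
# [0, 9, 12, 10, 10, 8, 10, 12, 10, 8, 12, 10, 8, 10, 10, 12, ...]
```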
Looking at the terms themselves, the position numbers of the 12's seem to all be congruent to 2 or 7 mod 8, and even after 10000 terms there seems to be nothing special as to their ordering. And the positions of the eights (5, 9, 12, 17, 21, 24, ...) are all congruent to 1 or 0 mod 4. But it seems that there is a particular order as to which of the positions are which. I was also able to use Wolfram Alpha to find a function that was able to correctly evaluate the positions of all the 8's for the first 10000 terms. And after unsuccessfully trying to find a formula for the 10's, here is what the structure of the continued fraction appears to look like:
$K=a_0+\frac{1}{a_1+\frac{1}{a_2+\frac{1}{a_3+\frac{1}{a_4+\frac{1}{a_5+\ldots}}}}}$
where
$\forall~n\in\mathbb{Z}_{\geqslant 0},~a_n=\begin{cases}
0 & n=0 \\
9 & n=1 \\
8 & n\in\left\{\frac{8m+\left(\frac{-1}{m-1}\right)+1}{2}~:~m\in\mathbb{Z}^{+}\right\} \\
12 & n\equiv 2\left(\operatorname{mod}8\right)~\text{or}~7\left(\operatorname{mod}8\right) \\
10 & \text{otherwise}
\end{cases}$
where $\left(\frac{n}{m}\right)$ is the Jacobi symbol.
So there we have it. A transcendental number whose continued fraction has bounded terms.
TITLE: How does the derivative with respect to the complex conjugate even make sense?
QUESTION [6 upvotes]: I came across this the other day:
$$
\frac{\partial f}{\partial \bar{z}} = \frac12\left(\frac{\partial f}{\partial x}+i\frac{\partial f}{\partial y}\right)
$$
I decided to attempt to work it out myself to better understand it. I know $2x = z + \bar{z}$, and $2iy = z - \bar{z}$, and using the total derivative we have
$$
\frac{\partial f}{\partial \bar{z}} = \frac{\partial f}{\partial x}\frac{\partial x}{\partial \bar{z}} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial \bar{z}}
$$
and this is about where I got stuck. How exactly am I supposed to calculate $\frac{\partial x}{\partial \bar{z}}$? My confusion doesn't lie in the notation, but in the mechanics of the very thing I'm being asked to differentiate. Look at $x$:
$$
x(\bar{z}) = \frac{\bar{z} + z}{2} = \frac{\bar{z}+\bar{\bar{z}}}{2}
$$
if we label $Z = \bar{z}$, then $\frac{\partial x}{\partial \bar{z}} = \frac{\partial x}{\partial Z}$, and $x(Z) = \frac{Z+\bar{Z}}2$. However, as far as I can tell, $\frac{Z+\bar{Z}}{2}$ isn't even complex differentiable, because $\bar{Z}$ isn't complex differentiable with respect to $Z$. $x(Z)$ doesn't satisfy the CR equations:
$$
x(X+iY) = X + i0 = u(X, Y) + iv(X, Y), \\
u_X = 1 \neq 0 = v_Y,
$$
(the other equation, $v_X = -u_Y$, holds trivially since both sides are $0$, but the first one already fails), so how could I possibly take the complex derivative of it? That doesn't make any sense.
What exactly am I missing here? Is the derivative $\frac{\partial x}{\partial \bar{z}}$ a different kind of derivative? Are we not supposed to do the complex derivative but instead something else?
REPLY [7 votes]: It is a different way of thinking.
Think of a function of two variables. The variables may be $x,y$. But you can write
$$
z = x+iy,\qquad \overline{z} = x - i y
$$
and get two new variables $z, \overline{z}$. You can write $x$ and $y$ in terms of $z$ and $\overline{z}$. You can write $z$ and $\overline{z}$ in terms of $x$ and $y$. Thus, you can do a change of variables. When $f$ is considered a function of $z$ and $\overline{z}$ in this way, then of course the two partial derivatives make sense.
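To finish the computation attempted in the question: treating $z$ and $\overline{z}$ as independent variables (so $\partial z/\partial\overline{z}=0$), the change of variables above gives
$$\frac{\partial x}{\partial \overline{z}}=\frac12,\qquad \frac{\partial y}{\partial \overline{z}}=-\frac{1}{2i}=\frac{i}{2},$$
and the total derivative then yields
$$\frac{\partial f}{\partial \overline{z}}=\frac{\partial f}{\partial x}\cdot\frac12+\frac{\partial f}{\partial y}\cdot\frac{i}{2}=\frac12\left(\frac{\partial f}{\partial x}+i\frac{\partial f}{\partial y}\right).$$
No complex differentiability of $x$ or $y$ with respect to $z$ is needed; these are formal partial derivatives in the two variables $z,\overline{z}$.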
Some day perhaps you will study differential geometry, where you will learn a more complete way of doing this.
TITLE: $2^5 \times 9^2 =2592$
QUESTION [14 upvotes]: $$2^5 \times 9^2 =2592$$ I am trying to find any other number in this pattern.
That is find natural numbers $a$ , $b$ , $c$ and $d$ such that
$$a^b \times c^d = \overline{abcd} $$
We have
$$a^b \times c^d = \overline{cd} + 100\overline{ab} $$
So $$a^b \times c^d - \overline{cd} = 100\overline{ab} $$
The LHS is a multiple of $100$. Any help from here will be greatly appreciated.
REPLY [11 votes]: The 2592 puzzle apparently originated with Henry Ernest Dudeney's 1917 book Amusements in Mathematics where it is given as puzzle 115, "A PRINTER'S ERROR":
In a certain article a printer had to set up the figures $5^4\times2^3,$ which of course meant that the fourth power of $5\ (625)$ is to be multiplied by the cube of $2\ (8),$ the product of which is $5,000.$ But he printed $5^4\times2^3$ as $5423,$ which is not correct. Can you place four digits in the manner shown, so that it will be equally correct if the printer sets it up aright, or makes the same blunder?
[. . . .]
The answer is that $2^5\times9^2$ is the same as $2592,$ and this is the only possible solution to the puzzle.
It was apparently rediscovered fifteen years later and published in the American Mathematical Monthly, vol. 40, December 1933, p. 607, as problem E69, proposed by Raphael Robinson, in the following form:
Instead of a product of powers, $a^bc^d,$ a printer accidentally prints the four digit number, $abcd.$ The value is however the same. Find the number and show that it is unique.
A solution by C. W. Trigg was published in vol. 41, May 1934, p. 332; the problem was also solved by Florence E. Allen, W. E. Buker, E. P. Starke, Simon Vatriquant, and the proposer.
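These days the uniqueness claim is easy to confirm by brute force; a small sketch (mine, not from the Monthly):

```python
# Search all digit quadruples a, b, c, d with a >= 1 for a^b * c^d = abcd.
solutions = [(a, b, c, d)
             for a in range(1, 10) for b in range(10)
             for c in range(10) for d in range(10)
             if a**b * c**d == 1000*a + 100*b + 10*c + d]
print(solutions)   # [(2, 5, 9, 2)]
```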
TITLE: What do the two things such that "Data is fixed" and "Parameters vary" in Bayesian statistics mean?
QUESTION [5 upvotes]: While following a Bayesian statistics course online, the lecturer said that "data is fixed" and "parameters vary" in Bayesian statistics. But the explanations I got don't really make me understand what those statements mean. The two ideas sound important for getting started with the basic idea of Bayesian statistics. I hope to hear explanations.
REPLY [7 votes]: Under a frequentist point of view you might have an unknown parameter, say $\theta$, that you want to estimate based on some data you have collected. You assume that this true and unknown parameter is fixed. Your data are expressed through a random variable, say $X$. So, for example, you are interested in maximizing a likelihood based on the probability density function $f(X\mid\theta)$. This means that you model your collected data under a belief that the probability function of your data depends on that unknown parameter. You can then estimate that parameter, say by maximizing the likelihood of the data (i.e. you consider the data random, in the sense that they are a random realization of the population that you study).
Under a Bayesian point of view things are a bit reversed. You do not view the parameter $\theta$ as an unknown constant, i.e. fixed at some value, that you try to estimate. You rather consider that the parameter itself has a marginal distribution $f(\theta)$, which is called a prior. This expresses your prior beliefs regarding the parameter, which is now viewed as a random variable since it follows a distribution. Under such a framework you might be interested in modelling $f(\theta\mid X)$, namely updating your knowledge of $\theta$ GIVEN the data you have collected. Since now the data are given, they are not random, hence the "data is fixed" that your lecturer mentioned.
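To see the contrast in a few lines of code, here is a minimal conjugate-prior sketch (my illustration, with made-up data):

```python
# Beta-Bernoulli update: the observed flips are fixed numbers; the
# randomness lives in theta, whose distribution the posterior updates.
a, b = 1, 1                    # Beta(1, 1) prior on theta (uniform)
data = [1, 0, 1, 1, 0, 1]      # fixed, observed coin flips
heads = sum(data)
a_post = a + heads
b_post = b + len(data) - heads
print(a_post, b_post, a_post / (a_post + b_post))   # Beta(5, 3), mean 0.625
```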
TITLE: Is there a quicker way to solve this integral: $\int \frac{3-\cos(x)}{(1+2\cos(x))\sin^2(x)}dx$?
QUESTION [9 upvotes]: The integral is:
$$ \int \frac{3-\cos(x)}{(1+2\cos(x))\sin^2(x)}dx$$
This is the way I approached it:
$$ \tan\left(\frac{x}{2}\right)=u\\dx=\frac{2}{\sec^2\left(\frac{x}{2}\right)}du$$
By using trigonometric identities we get:
$$ \sin(x)=\frac{2u}{1+u^2};\ \cos(x)=\frac{1-u^2}{1+u^2};\ \sec^2\left(\frac{x}{2}\right)=1+u^2
$$
Therefore the integral now becomes:
$$ 2\int \frac{3-\frac{1-u^2}{1+u^2}}{\left(1+2\left(\frac{1-u^2}{1+u^2}\right)\right)\left(\frac{2u}{1+u^2}\right)^2(1+u^2)}du=$$ $$\int\frac{(1+2u^2)(1+u^2)}{u^2(3-u^2)}du$$ By dividing the two polynomials we get: $$\int\left(-2+\frac{9u^2+1}{u^2(3-u^2)}\right)du$$ Using partial fractions we get to the simplified form: $$\int\left(-2+\frac{1}{3u^2}+\frac{28}{3(3-u^2)}\right)du$$ $$-2u-\frac{1}{3u}+\frac{28}{9}\int\frac{1}{1-\left(\frac{u}{\sqrt3}\right)^2}du \\ -2u-\frac{1}{3u}+\frac{28\sqrt3\tanh^{-1}{\left(\frac{u}{\sqrt3}\right)}}{9}+C$$
By substituting back in for $u$, we get the solution: $$ \bbox[5px,border:2px solid black]{\frac{28\sqrt3\tanh^{-1}{\left(\frac{\tan\left(\frac{x}{2}\right)}{\sqrt3}\right)}}{9}-2\tan\left(\frac{x}{2}\right)-\frac{1}{3}\cot\left(\frac{x}{2}\right)+C}$$
My question is, as you can understand from the title, is there any easier and faster way to solve this integral? If so, how? Thank you.
REPLY [7 votes]: HINT
One can look for the coefficients in the identity
$$f(y)=\frac{3-y}{(1+2y)(1-y^2)}=\frac A{1+2y}+\frac B{1-y}+\frac C{1+y}:$$
$$A=\lim_{y\to -\dfrac12}(1+2y)f(y) =\frac{14}3,$$
$$B=\lim_{y\to 1}(1-y)f(y) = \frac13,$$
$$C=\lim_{y\to-1}(1+y)f(y) = -2$$
and then find the integrals through the universal trigonometric substitution and known integrals
$$\int\dfrac{\mathrm dx}{1-\cos(x)}=\dfrac12\int\dfrac{\mathrm dx}{\sin^2\left(\dfrac x2\right)} = -\cot\left(\dfrac x2\right)+constant,$$
$$\int\dfrac{\mathrm dx}{1+\cos(x)}=\dfrac12\int\dfrac{\mathrm dx}{\cos^2\left(\dfrac x2\right)} = \tan\left(\dfrac x2\right)+constant.$$
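For what it is worth, a CAS can also confirm the antiderivative boxed in the question (my sketch using SymPy; if `simplify` does not fully cancel, the numerical spot check still does the job):

```python
import sympy as sp

x = sp.symbols('x')
F = (28*sp.sqrt(3)/9) * sp.atanh(sp.tan(x/2)/sp.sqrt(3)) \
    - 2*sp.tan(x/2) - sp.cot(x/2)/3
integrand = (3 - sp.cos(x)) / ((1 + 2*sp.cos(x)) * sp.sin(x)**2)

print(sp.simplify(sp.diff(F, x) - integrand))            # expect 0
print((sp.diff(F, x) - integrand).subs(x, 1).evalf())    # numerical check, ~0
```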
TITLE: Prove the recurrence $x_{n}=\frac{x_{n-2}}{1+x_{n-1}}$ converges for a unique $x_1$
QUESTION [8 upvotes]: While reading Steven Finch's amazing book Mathematical Constants I once encountered Grossman's constant. This is an interesting constant $c$ defined as the unique $x_1\in\mathbb{R}$ such that the sequence $\{x_n\}_{n=0}^\infty$ defined by the recurrence:
$$x_{n}=\frac{x_{n-2}}{1+x_{n-1}}$$
for $n\ge2$ with $x_0=1$ converges, where $c\approx 0.73733830336929...$. This seems like quite a remarkable theorem and I have no idea how to go about proving that a recurrence of this form converges for a single value, although it seems to have something to do with the limiting behaviour of the odd and even terms. I do not have access to the paper referenced by Finch and MathWorld in which the proof is apparently given, so I am wondering at the very least what techniques were used to prove it.
My question is: Does anyone know of (or can come up with) a proof (or even the idea of a proof) that this sequence converges for a unique $x_1$? Also, is any closed form for $c$ yet known?
REPLY [2 votes]: This is not an answer, but here is a collection of facts about the sequences:
If $x_0,x_1 \ge 0$ then $x_n \ge 0$ for all $n$, and $x_{n+2} = \frac{x_n} {1+x_{n+1}} \le x_n$, so that the two sequences
$(x_{2n})$ and $(x_{2n+1})$ are decreasing, so they have limits $l_0$ and $l_1$.
If the limit of one of the subsequences is nonzero, then the other sequence converges to $0$ exponentially, so one of the limits has to be $0$. Then we have to prove that for all $x_0 \ge 0$ there is a unique $x_1 \ge 0$ such that the sequence converges to $0$.
A long computation shows that
$(x_{n+3} - x_{n+2}) - (x_{n+1} - x_n) = \frac {x_n^2 x_{n+1}}{(1+x_{n+1})(1+x_n+x_{n+1})} \ge 0$,
and so the sequences $(x_{2n+1}-x_{2n})$ and $(x_{2n+2}-x_{2n+1})$ are increasing.
In particular, as soon as one of them gets positive, we know that the sequence will not converge.
Conversely, if $(x_{2n})$ doesn't converge to $0$ then $(x_{2n+1})$ converges to $0$, and so we must have $x_{2n+2} - x_{2n+1} > 0$ at some point, and similarly in the other case.
This means that $(x_n)$ converges to $0$ if and only if it stays decreasing forever, and we can decide if a particular sequence doesn't converge to $0$ by computing the sequence until it stops decreasing.
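That decision procedure can be turned into a bisection search for the constant (a numerical sketch, my code; floating point limits the attainable accuracy to roughly double precision):

```python
def verdict(x1, x0=1.0, max_iter=100000):
    """Iterate x_n = x_{n-2} / (1 + x_{n-1}) until some difference
    x_n - x_{n-1} turns positive.  By the monotonicity of the difference
    sequences, the parity of n then tells which subsequence survives:
    'high' means x1 was too large, 'low' means it was too small."""
    a, b = x0, x1                  # a = x_{n-1}, b = x_n, with n = 1
    for n in range(1, max_iter):
        if b > a:
            return 'high' if n % 2 == 1 else 'low'
        a, b = b, a / (1.0 + b)
    return None                    # undecided at this precision

lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if verdict(mid) == 'low':
        lo = mid
    else:
        hi = mid
print(lo)   # 0.7373383033...
```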
It also follows that the set $\{(x_0,x_1) \in\Bbb R_+^2\mid \lim x_n = 0\}$ is a closed subset of $\Bbb R_+^2$.
TITLE: There are infinitely many positive integers $n$ such that $\lfloor \sqrt{7}\cdot n \rfloor=k^2+1$ $(k\in \Bbb{Z})$
QUESTION [7 upvotes]: Show that there are infinitely many positive integers $n$ such that $$ \lfloor \sqrt{7}\cdot n \rfloor=k^2+1\quad(k\in \Bbb{Z}).$$
I think one can use the Pell equation to solve it, but I can't see how.
REPLY [6 votes]: I get that
there are an infinite number of $n$
such that
$\lfloor n\sqrt{d} \rfloor
=k^2-1
$,
not $k^2+1$.
However,
for $d$ such that
there are solutions to
$x^2-dy^2 = -3$,
such as $d=7$,
then there are $n$
such that
$\lfloor n \sqrt{d} \rfloor
= k^2+1
$.
This generalizes to $k^2 \pm j$
depending on the existence
of solutions to
$x^2-dy^2 = \pm m$
for different $m$.
Here we go.
As the OP stated,
the Pell equation comes into it.
We start with the fact that
there are an infinite
number of integer solutions to
$x^2-dy^2 = 1$,
where $d$ is square free.
For each of these,
$1
=x^2-dy^2
=(x-y\sqrt{d})(x+y\sqrt{d})
$
so
$(x-y\sqrt{d})
=\dfrac1{x+y\sqrt{d}}
$
or,
squaring,
$x^2-2xy\sqrt{d}+dy^2
=\dfrac1{(x+y\sqrt{d})^2}
$
or
$2xy\sqrt{d}
=x^2+dy^2-\dfrac1{(x+y\sqrt{d})^2}
=x^2+(x^2-1)-\dfrac1{(x+y\sqrt{d})^2}
=2x^2-1-\dfrac1{(x+y\sqrt{d})^2}
$
or
$xy\sqrt{d}
=x^2-\frac12(1+\dfrac1{(x+y\sqrt{d})^2})
$.
Since
$0 < \dfrac1{(x+y\sqrt{d})^2})
< \frac12$,
$\frac12
< \frac12(1+\dfrac1{(x+y\sqrt{d})^2})
< 1
$
so
$\lfloor xy\sqrt{d} \rfloor
=\lfloor x^2-\frac12(1+\dfrac1{(x+y\sqrt{d})^2}) \rfloor
= x^2-\lfloor\frac12(1+\dfrac1{(x+y\sqrt{d})^2}) \rfloor
= x^2-1
$.
This is not what is asked.
However,
if there is one solution to
$x^2-dy^2 = -1$,
then there are
an infinite number of solutions.
Modifying this calculation
we get
$x^2-2xy\sqrt{d}+dy^2
=\dfrac1{(x+y\sqrt{d})^2}
$
or
$2xy\sqrt{d}
=x^2+dy^2-\dfrac1{(x+y\sqrt{d})^2}
=x^2+(x^2+1)-\dfrac1{(x+y\sqrt{d})^2}
=2x^2+1-\dfrac1{(x+y\sqrt{d})^2}
$
or
$xy\sqrt{d}
=x^2+\frac12(1-\dfrac1{(x+y\sqrt{d})^2})
$
$\lfloor xy\sqrt{d} \rfloor
=\lfloor x^2+\frac12(1-\dfrac1{(x+y\sqrt{d})^2}) \rfloor
= x^2+\lfloor\frac12(1-\dfrac1{(x+y\sqrt{d})^2}) \rfloor
= x^2
$.
However, there are no solutions to $x^2-7y^2 = -1$, so this does not hold for $d=7$. Suppose instead that there are an infinite number of solutions to $x^2-dy^2 = -m$. Modifying the calculation once more, we get
$$-m = x^2-dy^2 = (x-y\sqrt{d})(x+y\sqrt{d})$$
or
$$x-y\sqrt{d} = \dfrac{-m}{x+y\sqrt{d}}.$$
Squaring,
$$x^2-2xy\sqrt{d}+dy^2 = \dfrac{m^2}{(x+y\sqrt{d})^2}$$
or
$$2xy\sqrt{d} = x^2+dy^2-\dfrac{m^2}{(x+y\sqrt{d})^2} = x^2+(x^2+m)-\dfrac{m^2}{(x+y\sqrt{d})^2} = 2x^2+m-\dfrac{m^2}{(x+y\sqrt{d})^2},$$
so that
$$xy\sqrt{d} = x^2+\frac12\left(m-\dfrac{m^2}{(x+y\sqrt{d})^2}\right)$$
and
$$\lfloor xy\sqrt{d} \rfloor = \left\lfloor x^2+\frac12\left(m-\dfrac{m^2}{(x+y\sqrt{d})^2}\right) \right\rfloor = x^2+\left\lfloor\frac12\left(m-\dfrac{m^2}{(x+y\sqrt{d})^2}\right)\right\rfloor.$$
If $m$ is odd, $m = 2j+1$, then
$$\lfloor xy\sqrt{d} \rfloor = x^2+\left\lfloor\frac12\left(2j+1-\dfrac{m^2}{(x+y\sqrt{d})^2}\right)\right\rfloor = x^2+j$$
once $x+y\sqrt{d} > m$.
Since there are solutions to $x^2-7y^2 = -3$ (e.g., $5^2-7\cdot 2^2 = -3$), and multiplying any solution by a solution of $u^2-7v^2=1$ yields another one, there are an infinite number of solutions, so the OP's statement is true (here $m=3$, hence $j=1$).
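As a numerical sanity check, here is a Python sketch; the recurrence $(x,y)\mapsto(8x+21y,\,3x+8y)$ comes from multiplying by the fundamental solution $(8,3)$ of $u^2-7v^2=1$:
import math
x, y = 5, 2   # x^2 - 7 y^2 = -3
for _ in range(6):
    assert x*x - 7*y*y == -3
    n = x*y
    # math.isqrt(7*n*n) is exactly floor(n*sqrt(7))
    assert math.isqrt(7*n*n) == x*x + 1
    print(n, x*x + 1)
    x, y = 8*x + 21*y, 3*x + 8*y
<|endoftext|>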
TITLE: All pairs $(m,n)$ satisfying $\operatorname{lcm}(m,n)=600$
QUESTION [5 upvotes]: Find the number of pairs of positive integers $(m,n)$, with $m \le n$, such that
the ‘least common multiple’ (LCM) of $m$ and $n$ equals $600$.
My tries:
It's very clear that $n\le600$, always.
Case when $n=600=2^3\cdot 3\cdot 5^2$: let $m=2^{k_1}\cdot 3^{k_2}\cdot 5^{k_3}$; the numbers of possible values of $k_1, k_2, k_3$ are $3+1=4$, $1+1=2$, and $2+1=3$ respectively, so the number of $m$ satisfying the above is $4\cdot 2 \cdot 3=24$.
Help me analyzing when $n<600$.
REPLY [4 votes]: Forget about the condition $m\leq n$ for the moment. Since $600=2^3\cdot 3^1\cdot 5^2$ we have
$$m=2^{\alpha_2}3^{\alpha_3}5^{\alpha_5},\quad n=2^{\beta_2}3^{\beta_3}5^{\beta_5}$$
with $\alpha_i$, $\beta_i\geq0$ and
$$\max\{\alpha_2,\beta_2\}=3,\quad \max\{\alpha_3,\beta_3\}=1,\quad \max\{\alpha_5,\beta_5\}=2\ .$$
It follows that
$$\eqalign{(\alpha_2,\beta_2)&\in\{(0,3),(1,3),(2,3),(3,3),(3,2),(3,1),(3,0)\}\>,\cr
(\alpha_3,\beta_3)&\in\{(0,1),(1,1),(1,0)\}\>,\cr
(\alpha_5,\beta_5)&\in\{(0,2),(1,2),(2,2),(2,1),(2,0)\}\cr}$$
are admissible, allowing for $7\cdot3\cdot5=105$ combinations. Exactly one of them has $m=n$, namely $m=n=600$, and in the other $104$ cases $m\ne n$. Since we want $m\leq n$, we keep exactly one member of each of the $52$ unordered pairs $\{m,n\}$ with $m\ne n$, leaving $52+1=53$ different solutions of the problem.
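A brute-force enumeration (a Python sketch; math.lcm requires Python 3.9+) confirms the count:
from math import lcm
count = sum(1 for m in range(1, 601)
              for n in range(m, 601) if lcm(m, n) == 600)
print(count)   # 53
<|endoftext|>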
TITLE: Why are partial derivatives of a harmonic function also harmonic?
QUESTION [5 upvotes]: I've tried manipulating some expressions but still can't quite get my head around why the partial derivatives of $u(x,y)$, a harmonic function, are also harmonic.
REPLY [7 votes]: A harmonic function satisfies
$\sum_i\frac{\partial^2 f}{\partial x_i^2}=0$
Let's take a look at $g=\frac{\partial f}{\partial x_j}$ for some $j$. Then
\begin{align*}
\sum_i\frac{\partial^2 g}{\partial x_i^2}&=\sum_i\frac{\partial^3 f}{\partial x_i^2\partial x_j}\\
&=\sum_i\frac{\partial}{\partial x_j}\left(\frac{\partial^2 f}{\partial x_i^2}\right)\\
&=\frac{\partial}{\partial x_j}\sum_i\left(\frac{\partial^2 f}{\partial x_i^2}\right)\\
&=\frac{\partial}{\partial x_j}0\\
&=0
\end{align*}
and thus $g$ is harmonic.
This is of course assuming that the third partial derivatives of $f$ are well-defined.
Additional note: This proof requires that the order in which the partial derivatives are taken does not matter, i.e. $\frac{\partial^3 f}{\partial x_i^2\partial x_j} = \frac{\partial^3 f}{\partial x_j\partial x_i^2}$. This can fail in general, but by Clairaut's (Schwarz's) theorem it does hold whenever the third partial derivatives are continuous, so the proof works for all such functions.
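As a concrete sanity check, here is a small sketch (assuming SymPy) using the harmonic polynomial $u = x^3 - 3xy^2$:
import sympy as sp
x, y = sp.symbols('x y')
u = x**3 - 3*x*y**2                        # a harmonic polynomial
lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)
print(sp.simplify(lap(u)))                 # 0: u is harmonic
print(sp.simplify(lap(sp.diff(u, x))))     # 0: u_x is harmonic as well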
REPLY [3 votes]: Let $u$ be harmonic. If we investigate whether $u_x$ is harmonic, we have to suppose that $u_x \in C^2$, hence we suppose $u \in C^3$. Let $v=u_x$.
From $u_{xx}+u_{yy}=0$ we get, by differentiating with respect to $x$:
$u_{xxx}+u_{yyx}=0$.
But this means $v_{xx}+v_{yy}=0$, i.e. $v$ is harmonic.<|endoftext|>
TITLE: Why do we only consider ideals with a prime norm when looking at ideals smaller than the Minkowski bound?
QUESTION [5 upvotes]: I was working on some examples on how to compute the class number of a quadratic number field: I do understand that for some quadratic number field $\mathbb{Q}(\sqrt{d})$, with $d\in \mathbb{Z}, d \neq \{0,1\}$ and squarefree, I need to compute the Minkowski bound and then look at ideals with norm smaller than the Minkowski bound, which then gives me the class number.
However, I was wondering why I only look at ideals with a prime number as the norm? One example was the number field $K:=\mathbb{Q}(\sqrt{-163})$ (which has class number $1$) where I computed that in every class of the class group, there exists some ideal $\mathfrak{A}$ with $N(\mathfrak{A}) < 8.1$. I could easily compute that there are no ideals with norm $2,3,5$ or $7$ as those are inert in $\mathcal{O}_K$.
Now I'm having some trouble to understand why I only need to look at those ideals and ignore those with norm $4,6$ or $8$ (as was done in the example). I assume that there's maybe some argument working with the factorization of ideals into prime ideals (which works in $\mathcal{O}_K$ as a Dedekind domain). I also looked up other questions on this topic but did unfortunately not find a definite answer.
Thank you for your help and explanations!
REPLY [3 votes]: You do not; you only consider prime ideals, which may or may not have prime norm. The reason we only consider the prime ideals is that the group of fractional ideals is generated by the prime ideals (that's what unique factorization into prime ideals means!), so if all the generators are principal, any product of them is principal. So it is necessary and sufficient that all prime ideals be principal in order to show the ring is a PID. Moreover, since norms are multiplicative, any ideal of norm below the Minkowski bound factors into prime ideals whose norms are also below the bound, which is why checking the primes above $2,3,5$ and $7$ suffices in your example.
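For the inertness computations mentioned in the question, here is a quick check (a sketch assuming SymPy): an odd unramified prime $p$ is inert in $\mathbb{Q}(\sqrt{-163})$ exactly when the Legendre symbol $\left(\frac{-163}{p}\right)=-1$, and $2$ is inert since $-163\equiv 5\pmod 8$.
from sympy.ntheory import legendre_symbol
for p in (3, 5, 7):
    # reduce -163 modulo p before evaluating the Legendre symbol
    print(p, legendre_symbol(-163 % p, p))   # -1 each time, so p is inert
<|endoftext|>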
TITLE: If the graphs of $f(x)$ and $f^{-1}(x)$ intersect at an odd number of points, is at least one point on the line $y=x$?
QUESTION [7 upvotes]: I was reading about intersection points of $f(x)$ and $f^{-1}(x)$ in this site. (Proof: if the graphs of $y=f(x)$ and $y=f^{-1}(x)$ intersect, they do so on the line $y=x$)
Then I saw this statement, written by N. S.: "If the graphs of $f(x)$ and $f^{-1}(x)$ intersect at a single point, then that point lies on the line $y=x$.
It is also true that if the graphs of $f(x)$ and $f^{-1}(x)$ intersect at an odd number of points, then at least one point lies on the line $y=x$. This follows immediately from the observation that the intersection points are symmetric with respect to that line..."
I want to know whether it's true or not and if it's true how we can prove it algebraically?
My try: I tried many functions and the statement held for each of them, but I can't prove it. (Or disprove it.)
REPLY [2 votes]: The argument is mainly a counting argument involving just a little algebra on the functions themselves.
The functions $f(x)$ and $f^{-1}(x)$ intersect either at a finite number of points, or an infinite number of points.
If the number of intersections is infinite, it is neither odd nor even.
So we only need to consider a finite number of intersections.
If $(p,q)$ is one of the intersection points, that means
$q = f(p) = f^{-1}(p).$
But from $q=f(p)$ we can deduce that $f^{-1}(q) = p,$
and from $q = f^{-1}(p)$ we can deduce that $f(q) = p,$
therefore $p = f(q) = f^{-1}(q)$; that is,
the two graphs also intersect at $(q,p).$
So consider the set of intersection points that are above the line $y=x.$
Suppose there are $n$ of these points, where $n \geq 0.$
For each point $(p,q)$ above the line $y=x$
(that is, where $q>p$), there is a corresponding point $(q,p)$ below the line $y=x,$ and vice versa.
Hence there are $n$ points below the line $y=x.$
Let the number of intersection points on the line $y=x$ be $m.$
Then the total number of intersection points is $n$ above the line, $n$ below the line, and $m$ on the line (where $m\geq 0$), for a total of
$$
2n + m.
$$
Now, $m$ has the same parity as $2n+m.$
If the total number of intersections $2n+m$ is odd,
it follows that $m$ is odd.
But zero is not odd; all non-negative odd numbers are positive.
So the total number of intersections on the line $y=x$ in that case
is an odd positive number.
In particular, it is at least $1.$
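As an illustration (a sketch; $f(x) = -x^3$ is a hypothetical example whose real inverse is $f^{-1}(x) = -\operatorname{sign}(x)\,|x|^{1/3}$), the graphs meet at exactly the three points $(-1,1)$, $(0,0)$, $(1,-1)$; the count is odd, and $(0,0)$ lies on $y = x$:
import math
def f(x):
    return -x**3
def f_inv(x):
    # the real inverse of f
    return -math.copysign(abs(x)**(1/3), x)
for p in (-1.0, 0.0, 1.0):
    q = f(p)
    # check that both graphs pass through (p, q)
    print((p, q), math.isclose(f_inv(p), q, abs_tol=1e-12))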
In this proof, we never assume there are any intersection points above the line $y=x,$ nor that there are any below the line or on the line.
But we show that if there are an odd number of intersection points altogether, the number of intersection points on the line is positive.<|endoftext|>
TITLE: Natural isomorphism of hom functors imply isomorphism of objects
QUESTION [6 upvotes]: Let $\mathcal{C}$ be a category. Let $A$ and $B$ be objects of $\mathcal{C}$. If we have an isomorphism natural in $-$: $\mathcal{C}(-,A)\cong\mathcal{C}(-,B)$; does that imply $A\cong B$?
REPLY [4 votes]: Yes (as the other answer points out). But here's a proof that does not invoke Yoneda.
It is given that $\mathcal C(-, A) \cong \mathcal C(-, B)$ naturally in $-$.
Plugging in $A$: $\mathcal C(A, A) \cong \mathcal C(A, B)$ naturally in $A$.
But $1_A$ is an element of $\mathcal C(A, A)$, and is an isomorphism, and the natural isomorphism from $\mathcal C(A, A)$ to $\mathcal C(A, B)$ must map this to an isomorphism in $\mathcal C(A, B)$, which means $A \cong B$.
To prove that last claim, let $\alpha \colon \mathcal C(-, A) \Rightarrow \mathcal C(-, B)$ be a natural isomorphism, and $\alpha^{-1} \colon \mathcal C(-,B) \Rightarrow \mathcal C(-, A)$ its inverse. Let $f = \alpha_A(1_A) \in \mathcal C(A, B)$. The naturality square is then as given below.
$\require{AMScd}$
\begin{CD}
\mathcal C(A, A) @>{\alpha_A}>> \mathcal C(A, B)\\
@A(-\circ f)AA @AA(- \circ f)A \\
\mathcal C(B,A) @>>{\alpha_B}> \mathcal C(B, B)
\end{CD}
Here $- \circ f$ is the function $\mathcal C(f, A)$ (and also $\mathcal C(f, B)$) whose action is to take a morphism from $\mathcal C(A,A)$ (or $\mathcal C(A,B)$) and compose it with $f$.
We must find an inverse $g \colon B \to A$ for $f$, and an obvious choice is $g = \alpha_B^{-1}(1_B) \in \mathcal C(B, A)$. So now we just need to verify that these are inverses.
First, observe from the naturality square that
\begin{equation*}
\alpha_A \circ (- \circ f) = (- \circ f) \circ \alpha_B = (\alpha_B(-)) \circ f.
\end{equation*}
Applying these functions to $g$ (which is an element of $\mathcal C(B, A)$), we get \begin{align*}
\alpha_A(g \circ f) &= \alpha_B(g) \circ f\\
&= 1_B \circ f\\
&= f.
\end{align*}
Then $g \circ f = \alpha_A^{-1}(f) = 1_A$.
Similarly, $f \circ g = 1_B$.<|endoftext|>
TITLE: Direct limit of infinite direct products mapped onto each other via shift maps
QUESTION [5 upvotes]: While working on a project, I ended up having to take direct limits, for which I admit I don't have a good intuition. Hoping that it is a simple problem for those who have more experience with direct limits than I do, I decided to ask it here.
Let $(G_i)_{i \in \mathbb{N}}$ be a sequence of abelian groups such that $G_i \subseteq G_{i+1}$ and consider the directed system
$G_0 \times G_1 \times \dots \rightarrow_{\varphi_0} G_1 \times G_2 \times \dots \rightarrow_{\varphi_1} \dots$
where each homomorphism $\varphi_k$ is given by $(\alpha,\beta,\gamma,\delta,\dots) \mapsto (\alpha\beta, \gamma, \delta, \dots)$. Is it possible to describe the direct limit of this system in terms of standard group theoretic constructions? I am aware of the standard construction of direct limits by taking an appropriate quotient of the disjoint union. I was hoping that there is a "simpler" description which avoids disjoint unions.
Here is what I was able to think so far. Consider the system
$G_1 \times G_2 \times \dots \rightarrow_{\psi_0} G_2 \times G_3 \times \dots \rightarrow_{\psi_1} \dots$
where each homomorphism $\psi_k$ is the left shift map given by $(\alpha,\beta,\gamma,\delta,\dots) \mapsto (\beta, \gamma, \delta, \dots)$. If I am not mistaken, the direct limit of this system should be the reduced product of the groups $G_i$ along the cofinite filter, where the isomorphism takes any element in the direct limit to the equivalence class of the appropriate sequence.
This suggests that the reduced product should be a part of the original direct limit I am considering. However, I can't really figure out how the first component that I discarded is going to interact with the reduced product.
REPLY [2 votes]: First of all, there is a much simpler description of direct limits in the case where all of the bonding maps are surjective. In particular, if
$$
G_0 \,\xrightarrow{\varphi_0}\, G_1 \,\xrightarrow{\varphi_1}\, G_2 \,\xrightarrow{\varphi_2}\, \cdots
$$
is a directed system of groups and epimorphisms, then the direct limit is a quotient $G_0/N$, where $N$ is the following normal subgroup of $G_0$:
$$
N \,=\, \{g\in G_0 \mid \varphi_n\cdots\varphi_1\varphi_0(g) = 1\text{ for some }n\in\mathbb{N}\} \,=\, \bigcup_{n\in\mathbb{N}} \ker(\varphi_n\cdots \varphi_1\varphi_0).
$$
For the directed system you have given, it follows that the direct limit is the quotient
$$
(G_0\times G_1 \times \cdots )\,\bigr/\,N
$$
where $N$ is the subgroup of the infinite direct sum $G_0 \oplus G_1 \oplus \cdots$ consisting of all tuples $(g_0,g_1,\ldots,g_n,1,1,1\ldots)$ for which $g_0g_1\cdots g_n = 1$.<|endoftext|>
TITLE: Calculate $\int_0^\infty {\frac{x}{{\left( {x + 1} \right)\sqrt {4{x^4} + 8{x^3} + 12{x^2} + 8x + 1} }}dx}$
QUESTION [14 upvotes]: Prove
$$I=\int_0^\infty {\frac{x}{{\left( {x + 1} \right)\sqrt {4{x^4} + 8{x^3} + 12{x^2} + 8x + 1} }}dx} = \frac{{\ln 3}}{2} - \frac{{\ln 2}}{3}.$$
First note that
$$4{x^4} + 8{x^3} + 12{x^2} + 8x + 1 = 4{\left( {{x^2} + x + 1} \right)^2} - 3,$$
we let
$${x^2} + x + 1 = \frac{{\sqrt 3 }}{{2\cos \theta }} \Rightarrow x = \sqrt { - \frac{3}{4} + \frac{{\sqrt 3 }}{{2\cos \theta }}} - \frac{1}{2},$$
then
$$I=\frac{1}{2}\int_{\frac{\pi }{6}}^{\frac{\pi }{2}} {\frac{{\left( {\sqrt {2\sqrt 3 \sec \theta - 3} - 1} \right)\sec \theta }}{{\left( {\sqrt {2\sqrt 3 \sec \theta - 3} + 1} \right)\sqrt {2\sqrt 3 \sec \theta - 3} }}d\theta } .$$
we have
\begin{align*}
&\frac{{\left( {\sqrt {2\sqrt 3 \sec \theta - 3} - 1} \right)\sec \theta }}{{\left( {\sqrt {2\sqrt 3 \sec \theta - 3} + 1} \right)\sqrt {2\sqrt 3 \sec \theta - 3} }} = \frac{{{{\left( {\sqrt {2\sqrt 3 \sec \theta - 3} - 1} \right)}^2}\sec \theta }}{{\left( {2\sqrt 3 \sec \theta - 4} \right)\sqrt {2\sqrt 3 \sec \theta - 3} }}\\
=& \frac{{\left( {2\sqrt 3 \sec \theta - 2 - 2\sqrt {2\sqrt 3 \sec \theta - 3} } \right)\sec \theta }}{{\left( {2\sqrt 3 \sec \theta - 4} \right)\sqrt {2\sqrt 3 \sec \theta - 3} }} = \frac{{\left( {\sqrt 3 \sec \theta - 1 - \sqrt {2\sqrt 3 \sec \theta - 3} } \right)\sec \theta }}{{\left( {\sqrt 3 \sec \theta - 2} \right)\sqrt {2\sqrt 3 \sec \theta - 3} }}\\
= &\frac{{\left( {\sqrt 3 \sec \theta - 1} \right)\sec \theta }}{{\left( {\sqrt 3 \sec \theta - 2} \right)\sqrt {2\sqrt 3 \sec \theta - 3} }} - \frac{{\sec \theta }}{{\sqrt 3 \sec \theta - 2}}.
\end{align*}
and
$$\int {\frac{{\sec \theta }}{{\sqrt 3 \sec \theta - 2}}d\theta } = \ln \frac{{\left( {2 + \sqrt 3 } \right)\tan \frac{\theta }{2} - 1}}{{\left( {2 + \sqrt 3 } \right)\tan \frac{\theta }{2} + 1}}+ C.$$
while
\begin{align*}&\int {\frac{{\left( {\sqrt 3 \sec \theta - 1} \right)\sec \theta }}{{\left( {\sqrt 3 \sec \theta - 2} \right)\sqrt {2\sqrt 3 \sec \theta - 3} }}d\theta } = \int {\frac{{\sqrt 3 - \cos \theta }}{{\left( {\sqrt 3 - 2\cos \theta } \right)\sqrt {2\sqrt 3 \cos \theta - 3{{\left( {\cos \theta } \right)}^2}} }}d\theta } \\
= &\frac{1}{2}\int {\frac{1}{{\sqrt {2\sqrt 3 \cos \theta - 3{{\left( {\cos \theta } \right)}^2}} }}d\theta } + \frac{{\sqrt 3 }}{2}\int {\frac{1}{{\left( {\sqrt 3 - 2\cos \theta } \right)\sqrt {2\sqrt 3 \cos \theta - 3{{\left( {\cos \theta } \right)}^2}} }}d\theta } .
\end{align*}
But how can we continue? It seems to be related to elliptic integrals.
REPLY [5 votes]: This is a pseudo-elliptic integral; it has an elementary antiderivative:
$$\int \frac{x}{(x+1)\sqrt{4x^4+8x^3+12x^2+8x+1}} dx = \frac{\ln\left[P(x)+Q(x)\sqrt{4x^4+8x^3+12x^2+8x+1}\right]}{6} - \ln(x+1) + C$$
where $$P(x) = 112x^6+360x^5+624x^4+772x^3+612x^2+258x+43$$
and
$$Q(x) = 52x^4+92x^3+30x^2-22x-11$$
To obtain this answer, just follow the systematic method of symbolic integration over a simple algebraic extension. Alternatively, you can throw it at a CAS with the Risch algorithm implemented (not Mathematica); a convenient choice is the online Axiom sandbox. The claimed value then follows by evaluating the antiderivative at the endpoints: at $x=0$ the argument of the logarithm is $P(0)+Q(0)=43-11=32$, giving $\frac{\ln 32}{6}=\frac{5\ln 2}{6}$, while as $x\to\infty$ the argument grows like $216x^6$, so the antiderivative tends to $\frac{\ln 216}{6}=\frac{\ln 6}{2}$; hence $I=\frac{\ln 6}{2}-\frac{5\ln 2}{6}=\frac{\ln 3}{2}-\frac{\ln 2}{3}$.
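The antiderivative can also be verified numerically (a sketch using mpmath, whose diff routine differentiates numerically):
import mpmath as mp
def s(x):
    return mp.sqrt(4*x**4 + 8*x**3 + 12*x**2 + 8*x + 1)
def F(x):
    P = 112*x**6 + 360*x**5 + 624*x**4 + 772*x**3 + 612*x**2 + 258*x + 43
    Q = 52*x**4 + 92*x**3 + 30*x**2 - 22*x - 11
    return mp.log(P + Q*s(x))/6 - mp.log(x + 1)
for x0 in (0.3, 1.0, 2.5):
    # F'(x0) should agree with the integrand at x0
    print(mp.diff(F, x0) - x0/((x0 + 1)*s(x0)))   # ~ 0 each time
<|endoftext|>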
TITLE: If $xy+xz+yz=1+2xyz$ then $\sqrt{x}+\sqrt{y}+\sqrt{z}\geq2$.
QUESTION [10 upvotes]: Let $x$, $y$ and $z$ be non-negative numbers such that $xy+xz+yz=1+2xyz$. Prove that:
$$\sqrt{x}+\sqrt{y}+\sqrt{z}\geq2$$
The equality occurs for $x=y=1$ and $z=0$.
I tried Lagrange Multipliers and more, but I don't see a proof.
REPLY [2 votes]: Short proof.
Clearly $xy+yz+zx \ge 1$, since the constraint gives $xy+yz+zx = 1+2xyz \ge 1$.
We have by AM-GM
$$(\sqrt{x}+\sqrt{y}+\sqrt{z})^2=x+y+z+2(\sqrt{xy}+\sqrt{yz}+\sqrt{zx})$$
$$\ge x+y+z+ \frac{4xy}{x+y}+\frac{4yz}{y+z}+\frac{4zx}{z+x} $$
$$ \ge x+y+z + \frac{4(xy+yz+zx)}{x+y+z} \ge x+y+z+\frac{4}{x+y+z} \ge 4$$
The proof is complete; note that every inequality in the chain becomes an equality at $(x,y,z)=(1,1,0)$.
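A quick numerical sanity check (a sketch; the constraint surface is sampled by solving $xy+xz+yz=1+2xyz$ for $z=\frac{1-xy}{x+y-2xy}$):
import random
random.seed(1)
for _ in range(8):
    x, y = random.uniform(0, 2), random.uniform(0, 2)
    denom = x + y - 2*x*y
    if denom <= 0:
        continue
    z = (1 - x*y)/denom
    if z < 0:
        continue                    # keep only non-negative z
    s = x**0.5 + y**0.5 + z**0.5
    print(round(s, 4), s >= 2)      # always True
<|endoftext|>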
TITLE: Looking for references about a tessellation of a regular polygon by rhombuses.
QUESTION [17 upvotes]: A regular polygon with an even number of vertices can be tessellated by rhombi (or lozenges), all with the same side length, with angles in arithmetic progression, as can be seen in figures 1 to 3.
Fig. 1
Fig. 2
Fig. 3
I had already seen this kind of tessellation, and I met it again in a recent question on this site (Tiling of regular polygon by rhombuses).
Let the polygon be $n$-sided with $n$ even. The starlike pattern of rhombi issuing from the rightmost point, which we will call the source, can be seen as successive ''layers'' of similar rhombi: a first layer $R_1$ with the most acute angles (there are $m:=\dfrac{n}{2}-1$ of them), then, moving away from the source, a second layer $R_2$ with $m-1$ rhombi, etc., with a grand total of $\dfrac{m(m+1)}{2}$ rhombi.
It is not difficult to show that the rhombi in layer $R_p$ are characterized by angles $p\dfrac{\pi}{m+1}.$
In fact (I had no idea of it at first), the rhombus pattern described above is much less mysterious when seen inside a larger structure such as the one shown in figure 4. The generation process is simple: a regular polygon with $m$ sides is rotated by successive rotations of angle $\dfrac{\pi}{m+1}$ around one of its vertices.
Fig. 4
My question about this tessellation is twofold:
where can I find some references?
are there known properties/applications?
The different figures have been produced by Matlab programs. The program that generated Fig. 2 is given below; it uses complex numbers, which are especially apt to render angular relationships:
hold on; axis equal
m=9; n=2*m+2;                        % n-sided polygon, n even
i=complex(0,1); pri=exp(2*i*pi/n);   % primitive n-th root of unity
v=pri.^(0:(n-1));                    % vertices of the unit n-gon
for k=0:m-1
   % layer k: the arc 1-(1-v(1:m+2-k)) rotated by pri^k about the source z=1
   z=1-(pri^k)*(1-v(1:m+2-k));
   plot([z,NaN,conj(z)],'color',rand(1,3),'linewidth',5);  % arc and its mirror image
end;
Edit: I am indebted to @Ethan Bolker for attracting my attention to
zonohedra (or zomes, as some architects call them), a 3D extension of Fig. 4 (or an equivalent one with fewer or more circles); by 3D extension, we mean a polyhedron made of (planar) rhombic facets whose projection on the $xOy$ plane is the initial figure, as shown in Fig. 5. The idea is simple (we refer here to the two left figures in Fig. 6): the central red "layer" (with the thinnest rhombi) is "lifted" like an umbrella whose highest point, the apex of the zonohedron, sits say at height $z=1$, with the bottoms of the $n$ ribs of the umbrella at $z=1-a$. Let us denote by $V_k, \ k=1, \cdots, n$, with components $\left(\cos(\tfrac{2 \pi k}{n}),\sin(\tfrac{2 \pi k}{n}), -a\right)$, the (3D) vectors issuing from the apex. Layer-$1$ rhombi have sides $V_k$ and $V_{k+1}$; by the very definition of a rhombus, layer-$2$ (yellow) rhombi have sides $V_k$ and $V_{k+2}$, etc. Note that Fig. 6, unlike Fig. 5, displays a closed zonohedron obtained by gluing two identical zonohedra. The right part of Fig. 6 displays the same zonohedron colored in a spiraling way.
Let us remark that there is a degree of freedom, namely how widely the initial "umbrella" with ribs $V_k$ is opened: the parameter $a$ can be chosen.
Fig. 5 : The upper part of a regular zonohedron and its projection onto the horizontal plane.
Fig. 6 : A typical regular zonohedron generated by Minkowski addition of vectors $(\cos(2k \pi/n), \sin(2k \pi/n),1)$ for $k=1,2,...n$ with $n=15$.
Fig. 7 : A rhombic 132-hedron (image borrowed from the Wikipedia article).
See the very educational page on S. Dutch's site :
(https://www.uwgb.edu/dutchs/symmetry/zonohedra.HTM) (sorry: broken link)
About "zomes", a word coined by architects as a condensate of "zonohedra" and "domes", have a look at (http://baselandscape.com/portfolio/the-algarden/) (http://www.structure1.com/zomes-coming-to-the-states/).
Have a look at the article (https://en.wikipedia.org/wiki/Zonohedron) which enlarges the scope ; I have isolated the picture of the rhombic 132-hedron (Fig. 7).
The blog of "RobertLovePi" has stunning illustrations, for example :
(https://robertlovespi.net/2014/02/16/zonohedron-featuring-870-rhombic-faces-of-15-types/).
A general definition of zonotopes (general name for zonohedra) is as a Minkowski addition of segments.
See (http://www.cs.mcgill.ca/~fukuda/760B/handouts/expoly3.pdf). See also the article by Fields medallist Jean Bourgain (https://link.springer.com/content/pdf/10.1007%2FBF02189313.pdf).
A funny article about zomes (http://archive.bridgesmathart.org/2012/bridges2012-545.pdf). "Bridges Organization" promotes connections between mathematics and arts, in particular graphical arts.
See (https://www.encyclopediaofmath.org/index.php/Zonohedron) and references therein.
The zonotopes page on the site of David Eppstein.
The rhombic dodecahedron is a zonohedron that can tessellate the 3D space.
A Geogebra puzzling animation,
A very interesting 19 pages article by Sandor Kabai in the book entitled "Homage to a Pied Puzzler" Ed. Pegg Jr, Alan H. Schoen, Tom Rodgers Editors, AK Peters, 2009 (this book is a tribute to Martin Gardner).
A zonohedron can be "decomposed" as a sum of (hyper) parallelepipeds, giving a way to compute its volume (https://mathoverflow.net/q/349558)
REPLY [3 votes]: Elaborating some more on my previous comment:
Don't know about real applications, but the construction would make a great "proof without words" for this trig identity: $\;\sum_{k=1}^m(m−k+1) \sin \dfrac{k \pi}{m+1}= \dfrac{m+1}{2} \cot \dfrac{\pi}{2(m+1)}\,$.
With OP's notation, where $\,n=2(m+1)\,$ is the number of sides of the regular polygon, it is easily seen that there are $\,m\,$ "bands" of congruent rhombi in the tessellation. From right to left, the first band consists of $\,m\,$ rhombi with an angle of $\,\frac{\pi}{m+1}\,$; the $k^{th}$ band is then made of $\,m-k+1\,$ rhombi with increasing angles $\,\frac{k \pi}{m+1}\,$, all the way to $\,k=m\,$, which gives the single leftmost rhombus.
A rhombus with side $\,a\,$ and an angle of $\,\alpha\,$ has an area of $\,a^2 \sin \alpha\,$, and the areas of all bands sum up to the area of the regular polygon, $\,\frac{na^2}{4} \cot \frac{\pi}{n}\,$, from which the identity above follows.
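The identity is easy to check numerically (a Python sketch):
import math
for m in (1, 5, 12):
    lhs = sum((m - k + 1)*math.sin(k*math.pi/(m + 1)) for k in range(1, m + 1))
    rhs = (m + 1)/(2*math.tan(math.pi/(2*(m + 1))))
    print(m, lhs, rhs)   # the last two columns agree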
Other trigonometric identities can be derived from this tessellation as well. For just one example, in the case of odd $\,m\,$ the horizontal diagonals of the odd-numbered rhombi add up to the diameter of the circumscribed circle $\,\frac{a}{\sin \pi/n}\,$, and therefore $\,\sum_{k=1}^{(m+1)/2} \cos \frac{(2k-1) \pi}{2(m+1)} = \frac{1}{2} \csc \frac{\pi}{2(m+1)}\,$.<|endoftext|>
TITLE: Asymptotic behavior of integral $\int_1^\infty \frac{e^{-xt}}{\sqrt{1+t^2}}dt$ as $x \to 0$
QUESTION [6 upvotes]: I wish to prove that:
$$ \int_1^\infty \frac{e^{-xt}}{\sqrt{1+t^2}}dt \sim - \ln x \quad \mathrm{as} \quad x \to 0^+$$
using the fact that:
$$ f \underset{b}{\sim} g \Rightarrow \int_a^x f \underset{x \to b}{\sim} \int_a^x g$$
if $\int_a^x g \to \infty$ as $x \to b$ and $f$ and $g$ are integrable on every interval $[a,c]$ with $c < b$.
Does anyone have an idea?
Thank you!
REPLY [3 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
&\left.\int_{1}^{\infty}{\expo{-xt} \over \root{1 + t^{2}}}\,\dd t\,
\right\vert_{\ x\ >\ 0} =
\int_{1}^{\infty}{\expo{-xt} \over t}\,\dd t +
\int_{1}^{\infty}\expo{-xt}\pars{{1 \over \root{1 + t^{2}}} - {1 \over t}}
\,\dd t
\\[5mm] & =
x\int_{1}^{\infty}\ln\pars{t}\expo{-xt}\,\dd t
-
\int_{1}^{\infty}{\expo{-xt} \over t\root{1 + t^{2}}\pars{\root{1 + t^{2}} + t}}
\,\dd t
\\[1cm] & =
-\ln\pars{x}\expo{-x}
+ \int_{x}^{\infty}\ln\pars{t}\expo{-t}\,\dd t -
\int_{1}^{\infty}{\expo{-xt} \over t\root{1 + t^{2}}\pars{\root{1 + t^{2}} + t}}
\,\dd t
\end{align}
The 'remaining' integrals stay finite as $\ds{x \to 0^{+}}$, so that
$$\bbx{\ds{%
\int_{1}^{\infty}{\expo{-xt} \over \root{1 + t^{2}}}\,\dd t \sim -\ln\pars{x}
\quad\mbox{as}\ x \to 0^{+}}}
$$
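Numerically (a sketch with mpmath; the intermediate quadrature point $1/x$ helps track the scale set by the exponential):
import mpmath as mp
mp.mp.dps = 20
g = lambda u: mp.quad(lambda t: mp.exp(-u*t)/mp.sqrt(1 + t**2), [1, 1/u, mp.inf])
for u in (1e-2, 1e-4, 1e-6):
    print(u, g(u)/(-mp.log(u)))   # the ratio tends to 1
<|endoftext|>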
TITLE: How to find length of a part of a curve?
QUESTION [27 upvotes]: How can I find the length of a curve, for example $f(x) = x^3$, between two limits on $x$, for example $1$ and $8$?
I was bored in a maths lesson at school and posed myself the question:
What's the perimeter of the region bounded by the $x$-axis, the lines $x=1$ and $x=8$ and the curve $y=x^3$?
Of course, the only "difficult" part of this is finding the length of the part of the curve between $x=1$ and $x=8$.
Maybe there's an established method of doing this, but I as a 16 year-old calculus student don't know it yet.
So my attempt at an approach was to superimpose many triangles onto the curve so that I could sum all of their hypotenuses.
Just use many triangles like the above,
$$
\lim_{\delta x\to 0}\frac{\sqrt{\left(1+\delta x-1\right)^2+\left(\left(1+\delta x\right)^3-1^3\right)^2}+\sqrt{\left(1+2\delta x-\left(1+\delta x\right)\right)^2+\left(\left(1+2\delta x\right)^3-\left(1+\delta x\right)^3\right)^2}+\cdots}{\frac7{\delta x}}
$$
I'm not entirely sure if this approach is correct though, or how to go on from the stage I've already got to.
REPLY [6 votes]: Your idea to use right triangles is good! It might be easier to look at a general curve than $y=x^3$, though, to build the machinery.
Let's say we wish to measure the arclength between $x=a$ and $x=b$. Choose $N+1$ points $\{P_1,P_2,\dots,P_{N+1}\}$ to partition the curve into $N$ sections. In the graphic I chose $N=6$ such that I had $7$ points.
The distance between two points, $P_{n}$ and $P_{n+1}$ is essentially the Pythagorean theorem.
$$ P_{n}P_{n+1} = \sqrt{(\Delta x)^2 + (\Delta y)^2}$$
We can estimate the arclength by summing the distances between these points.
$$\sum_{n=1}^N P_nP_{n+1} = \sum_{n=1}^{N} \sqrt{(\Delta x_n)^2 + (\Delta y_n)^2}$$
As we take the number of partitioning points to infinity, or $N\to\infty$, we have $\Delta x\to dx$ and $\Delta y\to dy$. We call the resulting differential distance $ds$. It is a measure of infinitesimal length.
\begin{align}
ds &= \sqrt{dx^2 + dy^2}
\\
&= \sqrt{1 + \left(\frac{dy}{dx}\right)^2}dx
\\
&= \sqrt{1 + \left(f'(x)\right)^2}dx
\end{align}
In the limit, the sum above becomes the integral you seek (i.e. we are now summing up infinitely many infinitesimal arc length elements along the curve).
$$\sum_{n=1}^{N} P_nP_{n+1} \to \int_{P_1}^{P_{N+1}} ds = \int_a^b \sqrt{1 + \left(f'(x)\right)^2}dx$$
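Here is a small numerical illustration (a Python sketch) of these polyline sums converging for the OP's curve $y = x^3$ on $[1, 8]$:
import math
def f(x):
    return x**3
def polyline_length(N, a=1.0, b=8.0):
    # sum of the N hypotenuses joining equally spaced points on the curve
    xs = [a + (b - a)*i/N for i in range(N + 1)]
    return sum(math.hypot(xs[i+1] - xs[i], f(xs[i+1]) - f(xs[i]))
               for i in range(N))
for N in (10, 100, 1000):
    print(N, polyline_length(N))   # converges to the arc length integral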
You ought to find a similar dialogue in Stewart's or Thomas's Calculus. Look in the index for arc length integral.<|endoftext|>
TITLE: How many sets $(A_1,A_2,\cdots, A_k)$ which are subsets of $\{1,2,\cdots ,n\}$
QUESTION [5 upvotes]: For given natural numbers $n,k$, how many are there $k$-tuples $(A_1,A_2,\cdots ,A_k)$ such that
$$A_1\subseteq A_2\subseteq\cdots \subseteq A_k\subseteq \{1,2,3,\cdots ,n\}$$
I've thought to prove by induction on $k$ that the number of $k$-tuples is equal to
$$\sum_{t=0}^n{n\choose t}k^t=(k+1)^n$$
Though I have no idea what happens when you add another subset, my idea was to let the added subset be $A_1$ and shift every other subset's index by $1$, then split into $n+1$ cases according to $|A_2|=t$ for each $t$ from $0$ to $n$.
Maybe there is a better way.
REPLY [4 votes]: The easiest solution is certainly the one in the comments, but we can approach the problem using induction in the following way:
The first observation is that the only important property of the set $\lbrace 1, 2, \dots, n \rbrace$ in the problem is that it has $n$ elements. We would have the same answer if we replaced it with any other set with $n$ elements.
We now use induction on $k$. For the base case, we consider $k = 1$. If we only want a single subset
$$ A_1 \subseteq \lbrace 1, 2, \dots, n \rbrace $$
then $A_1$ can be any of the $2^n = {(1+1)}^n$ subsets of $\lbrace 1, 2, \dots, n \rbrace$, and so the number of subsets in this case is indeed ${(k+1)}^n$.
Now suppose that the result is true for some $k$ and for every $n$. We wish to then prove that the number of ways of choosing sets $A_1, A_2, \dots, A_{k+1}$ such that
$$ A_1 \subseteq A_2 \subseteq \dots \subseteq A_{k+1} \subseteq \lbrace 1, 2, \dots, n \rbrace $$
is equal to ${(k+2)}^n$.
We count the number of ways of choosing the subsets by considering the number of elements in $A_{k+1}$. Let this number be $m$. Then there are $\binom{n}{m}$ ways to choose the elements in $A_{k+1}$.
Now $A_{k+1}$ is a set with $m$ elements, so by our earlier observation that the set $\lbrace 1, 2, \dots, n \rbrace$ is arbitrary, we can see that once we have chosen $A_{k+1}$, the number of ways of choosing sets $A_1, A_2, \dots, A_k$ such that
$$ A_1 \subseteq A_2 \subseteq \dots \subseteq A_k \subseteq A_{k+1} $$
is equal to ${(k+1)}^m$.
Thus the number of ways of choosing sets $A_1, A_2, \dots, A_{k+1}$ such that
$$ A_1 \subseteq A_2 \subseteq \dots \subseteq A_{k+1} \subseteq \lbrace 1, 2, \dots, n \rbrace $$
and such that $A_{k+1}$ has $m$ elements is equal to
$$ \binom{n}{m} {(k+1)}^m. $$
We see that the total number of ways of choosing the sets $A_1, A_2, \dots, A_{k+1}$ is then equal to
$$ \sum_{m=0}^{n} \binom{n}{m} {(k+1)}^m $$
which by the binomial theorem is equal to ${(k+2)}^n$, completing the induction.
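A brute-force count for small $n$ and $k$ (a Python sketch) agrees with the formula:
from itertools import combinations
def all_subsets(s):
    for r in range(len(s) + 1):
        yield from combinations(s, r)
def count_chains(n, k):
    # count k-tuples A_1 ⊆ A_2 ⊆ ... ⊆ A_k ⊆ {1,...,n}
    def rec(top, depth):
        if depth == 1:
            return sum(1 for _ in all_subsets(top))
        return sum(rec(s, depth - 1) for s in all_subsets(top))
    return rec(tuple(range(n)), k)
for n in (2, 3):
    for k in (1, 2, 3):
        print(n, k, count_chains(n, k), (k + 1)**n)   # last two columns agree
<|endoftext|>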
TITLE: Is there a notion of "normal subcategory" analgous to the notion of normal subgroup?
QUESTION [5 upvotes]: The idea of a replete subcategory to me seems very analogous to the idea of a characteristic subgroup, as both are, in some sense, subobjects invariant under a notion of equivalence (categorical equivalence for replete subcategories, automorphisms for characteristic subgroups). This led me to the question: Is there a notion in category theory that is similarly analogous to that of a normal subgroup in group theory?
REPLY [3 votes]: Despite the negativity of comments on this question, considering how the concept of normal subgroup extends to other categories is a very fruitful pastime, and I'll give a flavour of a few of generalizations. Since it seems to have been the motivation for asking this, I'll also explain how they apply to the ($1$-)category of categories.
The category of groups has the rather special property of admitting zero morphisms: between any pair of groups, there is a homomorphism sending everything to the identity element. A subgroup $N \leq G$ is normal if there is some homomorphism $h: G \to H$ such that the equalizer of $h$ and the zero morphism $0: G \to H$ is precisely the subgroup $N$.
Of course, not every category has zero morphisms. The most direct generalization in a category with equalizers is that of regular subobject, where we just ask for expressibility of the subobject as an equalizer of some pair of morphisms.
$$N \hookrightarrow G \rightrightarrows H$$
Characterizing regular subcategories of categories is a little tricky (and trying to find references to them in the literature is tricky because regular categories refer to something else!) They are subcategories, as you would expect, but whenever we have morphisms $g: A \to C$ and $u:B \to C$ in a regular subcategory and morphisms $f:A \to B$, $v: C \to B$ with $u \circ f = g$ and $v \circ u = \mathrm{id}_B$, then we must also have $f$ in the regular subcategory, since knowing that $Fg = Gg$ and $Fu = Gu$ for a pair of functors $F,G$ means that $Fu \circ Ff = F(g) = G(g) = Gu \circ Gf = Fu \circ Gf$, whence $Ff = Gf$; the dual property is also necessary{*}. This in particular means that whenever a morphism of a regular subcategory has a two-sided inverse in the larger category, the regular subcategory must contain that inverse. However, it is a bit weaker than being closed under conjugation: when we view the alternating group $A_5$ as a one-object category, the equalizer of the identity homomorphism with the homomorphism obtained by conjugation with the element $(1 2)(3 4)$ is a non-trivial subgroup (which is clearly not a normal subgroup). In particular, there are regular subobjects in the category of groups which are not normal subgroups, so this is a weak generalization.
Alternatively, we could observe that in the category of groups, the zero morphisms are those which factor through the trivial group, which is a 'zero object' in the category of groups, being both initial and terminal. As such, another way to express normal subgroups of $G$ are those which can be expressed as a kernel: the pullback along some group homomorphism $G \to H$ of the unique homomorphism $0 \to H$.
$$\require{AMScd}
\begin{CD}
N @>>> G;\\
@VVV @VVV \\
0 @>>{!}> H;
\end{CD}$$
This only uses the fact that $0$ is an initial object, so we can extend this definition to any category with an initial object and pullbacks. Unfortunately, this doesn't extend well to the 1-category of categories, since this has a strict initial object: the empty category. If we pull back the unique morphism from the empty category, we will always get the empty subcategory...
All is not lost, however. We can also note that a subgroup is normal if and only if it is the fiber of its cokernel, or in other words if the pushout of the inclusion of the subgroup along the unique morphism to the zero object produces a pullback square.
$$\require{AMScd}
\begin{CD}
N @>>> G;\\
@V{!}VV @VVV \\
1 @>>> H;
\end{CD}$$
This time we are using the fact that the zero object is a terminal object, and this definition makes sense as soon as we have a terminal object and pushouts. This is stronger than the definition of regular subobject I gave earlier, since $N \hookrightarrow G$ is automatically the equalizer of the given morphism $G \to H$ and the constructed morphism $G \to 1 \to H$ in this scenario. Note that we couldn't just define normal subobjects to be pullbacks of a morphism $1 \to H$ since in general there may be several (or none) of these to choose from; using a pushout in this definition gives us a canonical choice. This definition coincides with the usual one for groups, even when we view groups as one-object categories living in the 1-category of categories. A normal subcategory in this sense has the 2-out-of-3 property and has morphisms which are closed under conjugation by any isomorphisms in the larger category between its objects{*}.
There are further characterizations one could generalize, like closure under inner automorphisms, although that involves exploiting some 2-categorical structure (to extend the notion of inner automorphism to categories, where conjugation by a single element doesn't directly make sense, we would use the fact that in the 2-category of groups these correspond to automorphisms which are naturally isomorphic to the identity automorphism). As always in category theory, which generalization is the right one depends on what you want to do with it, but I encourage you to have fun seeing what else you can find!
{*} Note that I have not proven, nor do I know, whether the properties I give are sufficient as well as necessary.<|endoftext|>
TITLE: $\lim_{n\to\infty}\int_{-\infty}^{\infty}{\sin(n+0.5)x\over \sin(x/2)}\cdot{\mathrm dx\over 1+x^2}=\pi\cdot{e+1\over e-1}$
QUESTION [8 upvotes]: How can we show that
$$\lim_{n\to\infty}\int_{-\infty}^{\infty}{\sin(n+0.5)x\over \sin(x/2)}\cdot{\mathrm dx\over 1+x^2}=\pi\cdot{e+1\over e-1}\tag1$$
For $(1)$, substitution doesn't work, and neither does integration by parts.
We know $(2)$
$$\int_{-\infty}^{\infty}{\mathrm dx\over 1+x^2}=\pi\tag2$$
$${\sin(n+0.5)x\over \sin(x/2)}={\sin(nx)\cos(x/2)+\sin(x/2)\cos(nx)\over \sin(x/2)}\tag3$$
Simplified to
$$=\sin(nx)\cot(x/2)+\cos(nx)\tag4$$
$$\lim_{n\to\infty}\left[\int_{-\infty}^{\infty}\sin(nx)\cot(x/2)\cdot{\mathrm dx\over 1+x^2}+\int_{-\infty}^{\infty}\cos(nx)\cdot{\mathrm dx\over 1+x^2}\right]=\pi\cdot{e+1\over e-1}\tag5$$
$$\lim_{n\to\infty}\left[\int_{-\infty}^{\infty}\sin(nx)\cot(x/2)\cdot{\mathrm dx\over 1+x^2}+{\pi\over e^n}\right]=\pi\cdot{e+1\over e-1}\tag6$$
$$\lim_{n\to\infty}\int_{-\infty}^{\infty}\sin(nx)\cot(x/2)\cdot{\mathrm dx\over 1+x^2}=\pi\cdot{e+1\over e-1}\tag7$$
I am not sure how to continue
REPLY [12 votes]: Here is one quick way. Note that
$${\sin(n+0.5)x\over \sin(x/2)}$$
is a Dirichlet kernel and we may use the identity (which can easily be proven)
$${\sin(n+0.5)x\over \sin(x/2)} = 1 + 2\sum_{k=1}^n \cos (kx)$$ to rewrite the integral as
\begin{align}
I&= \lim_{n\to\infty}\int_{-\infty}^{\infty}{\sin(n+0.5)x\over \sin(x/2)}\cdot{\mathrm dx\over 1+x^2}=\lim_{n\to\infty}\int_{-\infty}^{\infty}\left[1 + 2\sum_{k=1}^n \cos (kx)\right]\cdot{\mathrm dx\over 1+x^2}\\
&=\int_{-\infty}^{\infty}{\mathrm dx\over 1+x^2} + 2\sum_{k=1}^{\infty }\int_{-\infty}^{\infty}{\cos (kx)\,\mathrm dx \over 1+x^2}
\end{align}
The latter integral can be evaluated using residues (closing the contour in the upper half-plane picks up the pole at $x=i$, giving $\int_{-\infty}^{\infty}{\cos (kx) \over 1+x^2}\,\mathrm dx=\pi e^{-k}$) to get
$$I = \pi + 2\pi \sum_{k=1}^{\infty }e^{-k}=\pi + \frac{2\pi}{e-1}=\pi\, \frac{e+1}{e-1}$$
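The residue computation can be confirmed numerically (a sketch with mpmath, whose quadosc routine handles the oscillatory integrand):
import mpmath as mp
mp.mp.dps = 25
for k in (1, 2, 3):
    # the integrand is even, so integrate over [0, inf) and double
    val = 2*mp.quadosc(lambda x: mp.cos(k*x)/(1 + x**2), [0, mp.inf], omega=k)
    print(k, val, mp.pi*mp.e**(-k))   # the two values agree
<|endoftext|>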
TITLE: Derivative of $\int_{0}^{x} \sin(1/t) dt$ at $x= 0$
QUESTION [5 upvotes]: I've been trying to figure out how to evaluate
$$
\frac{d}{dx}\int_{0}^{x} \sin(1/t) dt
$$ at $x = 0$. I know that the integrand is undefined at $x = 0,$ but is there any way to "extend" the derivative to the point? Or is it not differntiable there - and if so, why?
REPLY [2 votes]: Use the function $g(x) = x^{2}\cos(1/x)$, $g(0)=0$, so that $g$ is differentiable with $$g'(x) = 2x\cos(1/x)+\sin(1/x), \qquad g'(0)=0,$$ and hence upon integrating we get $$\frac{1}{x}\int_{0}^{x}\sin(1/t)\,dt=\frac{g(x)}{x}-\frac{2}{x}\int_{0}^{x}t\cos(1/t)\,dt.$$ Taking limits as $x\to 0$, both terms on the right-hand side are $O(x)$ (the second because $\left|\int_{0}^{x}t\cos(1/t)\,dt\right|\le x^2/2$), so the RHS tends to $0$ and the desired derivative is $0$.
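Numerically (a sketch with mpmath; the substitution $u=1/t$ turns the integral into $\int_{1/x}^{\infty}\frac{\sin u}{u^{2}}\,du$, which the oscillatory quadrature routine quadosc can evaluate):
import mpmath as mp
mp.mp.dps = 20
F = lambda x: mp.quadosc(lambda u: mp.sin(u)/u**2, [1/x, mp.inf], omega=1)
for x in (0.1, 0.01, 0.001):
    print(x, F(x)/x)   # the difference quotient tends to 0
<|endoftext|>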
TITLE: Find all conditions for $x$ that the equation $1\pm 2 \pm 3 \pm 4 \pm \dots \pm n=x$ has a solution.
QUESTION [13 upvotes]: Find all conditions for $x$ so that the equation $1\pm 2 \pm 3 \pm 4 \pm \dots \pm 1395=x$ has a solution.
My attempt: $x$ cannot be odd because the left-hand side is always even, so we must have $x=2k$ ($k \in \mathbb{Z}$); also, it has a maximum and a minimum:
$1-2-3-4-\dots-1395\le x \le 1+2+3+4+\dots +1395$
But I can't show whether these conditions are sufficient, or what other conditions are needed.
REPLY [4 votes]: Let $K = \sum_{i=1}^n {k_i}i$ where $k_i \in \{1,-1\}$ be one of the expressible numbers and $M = \sum_{i=1}^n {m_i}i$ where $m_i \in \{1,-1\}$ be another.
$M - K = \sum_{i=1}^n (m_i - k_i) i = \sum_{i=1}^n [\{-2|0|2\}]i$ is an even number so all such numbers have the same parity.
Clearly any $K$ satisfies $-\frac{n(n+1)}2=- \sum i \le K \le \frac{n(n+1)}2$.
Suppose $K < \frac{n(n+1)}2$, so that at least one $k_i = -1$, and let $j$ be the smallest index with $k_j = -1$ (so $k_m = 1$ for all $m < j$).
Let $\overline{K} = \sum {m_i}i$ where $m_i = k_i$ for $i \ne j, j-1$; $m_j = 1$ (whereas $k_j = -1$) and, if $j > 1$, then $m_{j-1} = -1$ (whereas $k_{j-1} = 1$). Then $\overline {K} = K + 2j - 2(j-1) = K + 2$ (and $\overline{K} = K+2$ also when $j=1$).
So, via induction, all (and only) the $K$ with $-\frac{n(n+1)}2 \le K \le \frac{n(n+1)}2$ and of the same parity as $\frac{n(n+1)}2$ are possible.
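A brute-force check for small $n$ (a Python sketch, with all signs free as above):
n = 6
total = n*(n + 1)//2
reachable = {0}
for i in range(1, n + 1):
    reachable = {s + i for s in reachable} | {s - i for s in reachable}
expected = {k for k in range(-total, total + 1) if (k - total) % 2 == 0}
print(reachable == expected)   # True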
So for $n = 1395$, all even numbers between $-\frac{1395\cdot 1396}2$ and $\frac{1395\cdot 1396}2$ are possible.<|endoftext|>
TITLE: Eisenbud 2.16 - units and nilpotents
QUESTION [5 upvotes]: This should be a pretty easy problem, but I'm a dummy so I'm stuck. Here's the statement:
Let $R$ be a $\mathbb Z$-graded ring, and $M$ a graded $R$-module, and let $x \in R_k$ for some non-zero integer $k$. Then $u = 1-x$ is not a zero divisor. Show that $u$ is a unit if and only if $x$ is nilpotent.
Now I know that a similar question has been asked here many times before, so let me say that I know how to show $u$ is not a zero divisor, and I can show that if $x$ is nilpotent, $u$ is a unit. This is easy and has been done on this site a million times. My struggle is in the converse, that is to say, if $u$ is a unit, then I want to prove that $x$ is nilpotent.
Apologies if this has also already been done on this site, but I can't seem to find the question on hand.
REPLY [4 votes]: Assume that $$(1-x)y=1$$ and let $y=\sum y_i$ be the decomposition of $y$ into homogeneous components $y_i$; then we have
$$\sum_i (y_i-xy_i)=1.$$
Comparing homogeneous components (and using the fact, proved below, that $y$ has no components of negative degree), we see that
$$y_0=1$$ and that
$$y_{i+k}=xy_i,$$
so $y_{nk}=x^n$ for every $n\ge0$. Since the sum is finite, only finitely many $y_i$ are nonzero, and therefore $x$ is nilpotent.
Note that if $-l$ (with $l>0$) were the smallest negative index where $y_{-l}$ is nonzero, then (assuming $k$ positive, by symmetry) the degree $-l$ component of $(1-x)y$ would be $y_{-l}$ itself, since $xy_{-l}$ has the different degree $-l+k$ and $y_{-l-k}=0$ by minimality; as $1$ has degree $0$, this forces
$y_{-l}=0$, a contradiction.<|endoftext|>
TITLE: Is this a construction of $E_8$?
QUESTION [7 upvotes]: Let $\{1, \omega, \omega^2\}$ be the three cube-roots of one.
Define the Eisenstein integers, $\Bbb{E}$, to be the $\Bbb{Z}$-linear combinations of $1$ and $\omega$. Note that $\omega^2 = -1-\omega \in \Bbb{E}$.
Let $\lambda = 1-\omega \in \Bbb{E}$. If we identify elements of $\Bbb{E}$ that differ by a multiple of $\lambda$, we obtain three equivalence classes, with representative elements $\{0, 1, -1\}$. Let $c: \Bbb{E} \to \{0,+, -\}$ be the classifier function that tells us which equivalence class a given integer belongs to.
Define the Tetracode, $T$, to be the following set (which is a perfect linear error-correcting code with Hamming distance 3):
$$\left\{\begin{matrix}
(0,0,0,0), & (0,+,+,+), & (0,-,-,-), \\
(+,0,+,-), & (+,+,-,0), & (+,-,0,+), \\
(-,0,-,+), & (-,+,0,-), & (-,-,+,0)
\end{matrix}\right\}$$
Let $E_8' = \left\{(w, x, y, z) \in \Bbb{E}^4 \mid (c(w), c(x), c(y), c(z)) \in T\right\}$. It is an 8-dimensional lattice, in which every point has 240 nearest neighbours. (24 of those neighbours are found by changing one coordinate, and 216 are found by changing three coordinates).
Is $E_8'$ equivalent to $E_8$?
REPLY [5 votes]: Short answer: yes, because $E_8$ is the unique eight-dimensional lattice with kissing number 240 (the largest possible). (Reference: Theorem 8, Chapter 14 by Bannai and Sloane, from Conway and Sloane, Sphere packings, lattices and groups, 1988.)
(Edit to add: your construction is an instance of what Sloane calls "Construction A for complex lattices", from Section 8, Chapter 7 of Conway and Sloane, whereby you can combine a length $n$ $q$-ary code with an index $q$ ideal of $\Bbb{E}$ or similar to form a complex $n$-dimensional lattice. Example 11b notes that the Tetracode yields $E_8$.)
If you want an explicit isomorphism, then write elements of $E_8'$ as
$v=(a,b,c,d,e,f,g,h)\in\Bbb{Z}^8$, representing
$(a+b\omega,c+d\omega,e+f\omega,g+h\omega)\in\Bbb{E}^4$. The triality vector
$(c(a+b\omega),c(c+d\omega),c(e+f\omega),c(g+h\omega))$ is then $(a+b,c+d,e+f,g+h)\in
T$, taking the co-ordinates modulo 3. Define
$$
M=\begin{pmatrix}
\begin{array}{rrrrrrrr}
0 & 0 & 0 & 0 & 3 & 0 & 3 & 0 \\
0 & 0 & -2 & -2 & 1 & -2 & 1 & -2 \\
0 & 0 & -2 & 4 & 1 & -2 & 1 & -2 \\
0 & 0 & 4 & -2 & 1 & -2 & 1 & -2 \\
0 & 0 & 0 & 0 & 3 & 0 & -3 & 0 \\
-2 & -2 & 0 & 0 & 1 & -2 & -1 & 2 \\
-2 & 4 & 0 & 0 & 1 & -2 & -1 & 2 \\
4 & -2 & 0 & 0 & 1 & -2 & -1 & 2 \\
\end{array}
\end{pmatrix}
$$
Then, considering an element of $E_8'$ as a column vector in $\Bbb{Z}^8$ as described above, multiplying on the left by $\frac16 M$ will give an element of the standard version of $E_8$:
$$\Gamma_8 = \left\{(x_i) \in \mathbb Z^8 \cup (\mathbb Z + \tfrac{1}{2})^8 : {\textstyle\sum_i} x_i \equiv 0\!\!\pmod 2\right\}.$$
Of course this is one of many possible maps since $E_8$ has a large symmetry group, but I
don't think there is a much nicer form of the matrix. Maybe it is not so surprising that
$M$ will look a bit ragged, since the definition of the Tetracode used a particular
choice of (non-symmetric) basis and $M$ has to depend on this choice.
To check this works, we first need to see that $x=\frac16 Mv\in E_8$ for $v\in E_8'$. First
check that each component of $Mv$ is congruent to zero mod 3. Note that horizontally adjacent pairs of
entries in $M$ satisfy $M_{i,2j}\equiv M_{i,2j+1}\pmod 3$, which means that
$$6x_i\equiv M_{i,0}\,c(a+b\omega)+M_{i,2}\,c(c+d\omega)+M_{i,4}\,c(e+f\omega)+M_{i,6}\,c(g+h\omega)\pmod 3.$$
Modulo 3, the only values of $(M_{i,0},M_{i,2},M_{i,4},M_{i,6})$ that arise are
$(0,0,0,0)$, $(0,1,1,1)$, and $(1,0,1,2)$. Dotting these with a Tetracode vector,
$(c(a+b\omega),c(c+d\omega),c(e+f\omega),c(g+h\omega))$, gives zero modulo 3 (easily
checked; also follows from the fact that the Tetracode is self-dual). Thus
$6x_i\equiv0\pmod3$, and so $x_i\in\frac12\Bbb{Z}$.
To see $x_i\equiv x_j\pmod1$, note that each column of $M$ is constant modulo 2. Thus
$6x_i\equiv 6x_j\pmod2$, which is what we want since $6x_i$ and $6x_j$ are also zero modulo 3.
To complete the proof that $x\in E_8$, add the rows of $M$ to see that $$\sum_i x_i=\frac16 \sum_{ij} M_{ij}v_j=2(e-f)\equiv0\pmod2.$$
Finally, to show that the image $\frac16 M(E_8')$ is the whole of $E_8$, it's sufficient to
check that we can reach a basis of $E_8$.
$$\frac16 M\begin{pmatrix}
\begin{array}{rrrrrrrr}
0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 & 2 & 0 \\
-1 & -1 & 1 & 1 & -1 & 0 & 0 & 0 \\
-1 & -1 & 2 & -1 & 0 & 0 & 0 & 0 \\
1 & -1 & 0 & 0 & 1 & -1 & 0 & 1 \\
0 & -1 & 0 & 0 & 1 & -1 & 0 & -1 \\
1 & -1 & 0 & 0 & -1 & 1 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 & 1 & 0 & 0 \\
\end{array}
\end{pmatrix}=
\begin{pmatrix}
\begin{array}{rrrrrrrr}
1 & -1 & 0 & 0 & 0 & 0 & 0 & 1/2 \\
1 & 1 & -1 & 0 & 0 & 0 & 0 & 1/2 \\
0 & 0 & 1 & -1 & 0 & 0 & 0 & 1/2 \\
0 & 0 & 0 & 1 & -1 & 0 & 0 & 1/2 \\
0 & 0 & 0 & 0 & 1 & -1 & 0 & 1/2 \\
0 & 0 & 0 & 0 & 0 & 1 & -1 & 1/2 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1/2 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1/2 \\
\end{array}
\end{pmatrix}
$$
The columns of the matrix on the left are all in $E_8'$, having triality vectors $(0,1,1,1)$, $(0,1,1,1)$, $(0,0,0,0)$, $(0,0,0,0)$, $(0,2,2,2)$, $(1,0,1,2)$, $(0,0,0,0)$, $(0,0,0,0)$ respectively. The columns of the matrix on the right form a basis of $E_8$.
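These claims are straightforward to check by machine; here is a sketch (assuming NumPy) that recomputes $\frac16 MV$ and verifies that each resulting column lies in $\Gamma_8$:
import numpy as np
M = np.array([
    [ 0,  0,  0,  0,  3,  0,  3,  0],
    [ 0,  0, -2, -2,  1, -2,  1, -2],
    [ 0,  0, -2,  4,  1, -2,  1, -2],
    [ 0,  0,  4, -2,  1, -2,  1, -2],
    [ 0,  0,  0,  0,  3,  0, -3,  0],
    [-2, -2,  0,  0,  1, -2, -1,  2],
    [-2,  4,  0,  0,  1, -2, -1,  2],
    [ 4, -2,  0,  0,  1, -2, -1,  2]])
V = np.array([
    [ 0,  0,  0,  0,  0, -1,  1,  0],
    [ 0,  0,  0,  0,  0, -1,  2,  0],
    [-1, -1,  1,  1, -1,  0,  0,  0],
    [-1, -1,  2, -1,  0,  0,  0,  0],
    [ 1, -1,  0,  0,  1, -1,  0,  1],
    [ 0, -1,  0,  0,  1, -1,  0, -1],
    [ 1, -1,  0,  0, -1,  1,  0,  0],
    [ 0, -1,  0,  0,  0,  1,  0,  0]])
B = M @ V / 6
print(B)   # reproduces the basis matrix displayed above
for col in B.T:
    # each column is in Gamma_8: all entries integral or all half-integral,
    # with even coordinate sum
    frac = col - np.floor(col)
    assert np.all(frac == 0) or np.all(frac == 0.5)
    assert col.sum() % 2 == 0
<|endoftext|>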
TITLE: What is the Mathematical Property that justifies equating coefficients while solving partial fractions?
QUESTION [9 upvotes]: The McGraw Hill PreCaculus Textbook gives several good examples of solving partial fractions, and they justify all but one step with established mathematical properties.
In the 4th step of Example 1, when going from:
$$1x + 13 = (A+B)x+(4A-5B)$$
they say to "equate the coefficients", writing the linear system
$$A+B = 1$$
$$4A-5B=13$$
It is a simple step, color coded in the textbook for easy understanding, but McGraw Hill does not justify it with any mathematical property, postulate or theorem. Addition and/or multiplication properties of equality don't seem to apply directly.
Can someone help me justify this step?!
REPLY [2 votes]: The general principle is: two polynomials are equal at every point if and only if their coefficients are equal. "If their coefficients are equal then the polynomials are equal" is clear.
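To see the mechanics concretely, here is a small sketch (assuming SymPy) for the system in the question:
import sympy as sp
x, A, B = sp.symbols('x A B')
lhs = sp.Poly(x + 13, x)
rhs = sp.Poly((A + B)*x + (4*A - 5*B), x)
# equate the coefficient of each power of x
eqs = [sp.Eq(a, b) for a, b in zip(lhs.all_coeffs(), rhs.all_coeffs())]
print(sp.solve(eqs, [A, B]))   # {A: 2, B: -1}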
Proving the reverse is not so easy in general. It follows from a stronger result from linear algebra, which says that the Vandermonde matrix for $d+1$ distinct real numbers is invertible, and so there is a unique polynomial of degree at most $d$ passing through any $d+1$ points, provided they all have different $x$ coordinates. This is probably not accessible to you at your level, but it is probably the best way to see it overall.
Another way to see it, though making this rigorous requires some calculus, is to note that if two polynomials are equal at each point, then their constant terms must be the same. Subtracting off the constant term from each and dividing by $x$, you have two polynomials that now again have to be equal at each point. So you plug in $x=0$, which gives agreement of the linear coefficients of the original polynomials. Doing this a total of $d+1$ times gives the desired result.
Where the lack of rigor comes in is in saying that $x/x=1$ even when $x=0$, which is not properly true. What we are really doing here is noticing that if two differentiable functions are equal everywhere then their derivatives are equal everywhere, and that if $p(x)=\sum_{k=0}^n a_k x^k$ then $a_k=\frac{p^{(k)}(0)}{k!}$, where $p^{(k)}$ denotes the $k$th derivative of $p$.<|endoftext|>
TITLE: Proof of the Reduced-to-Separated theorem
QUESTION [7 upvotes]: I'm trying to understand Vakil's proof of the Reduced-to-Separated theorem:
Theorem: Two $S$-morphisms $\pi:U\to Z$ and $\pi':U\to Z$ from a reduced scheme to a separated $S$-scheme agreeing on a dense open subset of $U$ are the same.
Proof: Let $V$ be the locus where $\pi$ and $\pi'$ agree (we have just proved that this exists). It is a closed subscheme of $U$ (because $Z$ is separated) which contains a dense open set. But the only closed subscheme of a reduced scheme $U$ that contains a dense open set is all of $U$.
The sentence I've written in bold is the sentence I don't understand. Vakil doesn't say anything further, and I don't remember ever proving this fact (though I may have missed it somewhere). Can somebody help me see why this is true?
REPLY [6 votes]: It is sufficient to show this for an affine scheme $U=Spec(R)$. Let $X=Spec(R/I)$ be a closed subscheme containing a dense open subset $U'$ of $U$. Since $X$ is closed, it contains the closure $\bar{U'}=U$, so $X$ contains every point of $U$. Thus, as ideals, $I\subset p$ for all $p\in U$, so $I$ is contained in the nilradical of $R$; since $U$ is reduced, the nilradical is $0$, hence $I=0$ and $X=Spec(R)=U$.<|endoftext|>
TITLE: Optimization problem: The curve with the minimum time to get through a pile of quicksand - Calculus of Variations
QUESTION [5 upvotes]: Suppose we have a function for the velocity given by $v(r,\theta)=r$, or in Cartesian form $v(x,y)=\sqrt{x^2+y^2}$. As we can see below, as we get closer to the origin $(0,0)$, the velocity decreases. I've found it easy to visualize the field as being some form of quicksand where it is harder to move through as you approach the origin.
This is demonstrated below by a plot I've made using Wolfram Mathematica:
What I am trying to do: Find the two functions for $y(x)$ which would minimize the time taken to get from point $A(-1,0)$ and $B(1,0)$.
I deduced that the fastest path cannot be the straight line directly from $A$ to $B$, since it would require an infinite time to get through the origin. A guess for the two curves is shown by the $\color{#0050B0}{\text{dark blue}}$ and the $\color{#00AAAA}{\text{light blue}}$ curves I've made. I'm almost certain they would be symmetrical. I've first guessed that the optimized curve would be similar to an ellipse, however I hesitated after I've plotted this.
I've done some research on the problem and figured it may be similar to the derivation of the Brachistochrone curve, using the Euler-Lagrange equations.
I am new to the Calculus of Variations, so here is the working I've done so far.
We have:
$$dt=\frac{ds}{v} \Rightarrow dt=\frac{\sqrt{dx^2+dy^2}}{\sqrt{x^2+y^2}} \Rightarrow dt=\frac{\sqrt{r^2+\left(\frac{dr}{d\theta}\right)^2}}{r}~d\theta$$
On the third step I converted it to polar coordinates. Adding integration signs:
$$\int_{0}^{T}~dt=\int_{\theta_1}^{\theta_2}\frac{\sqrt{r^2+\left(\frac{dr}{d\theta}\right)^2}}{r}~d\theta$$
$$T=\int_{\theta_1}^{\theta_2} \sqrt{1+\frac{(r')^2}{r^2}}~d\theta$$
Where $T$ is the total time taken to get from $A$ to $B$. I thought of using the following Euler-Lagrange Equation:
$$\frac{d}{d\theta}\left(\frac{\partial L}{\partial r'}\right)=\frac{\partial L}{\partial r} \tag{1}$$
For the functional:
$$L(\theta,r,r')=\sqrt{1+\frac{(r')^2}{r^2}}$$
Evaluating partial derivatives:
$$\frac{dL}{dr}=-\frac{(r')^2}{r^3\sqrt{\frac{(r')^2}{r^2}+1}}=-\frac{(r')^2}{r^2\sqrt{(r')^2+r^2}}$$
$$\frac{dL}{dr'}=\frac{r'}{r^2\sqrt{\frac{(r')^2}{r^2}+1}}=\frac{r'}{r\sqrt{(r')^2+r^2}}$$
Substituting into $(1)$:
$$\frac{d}{d\theta}\left(\frac{r'}{r\sqrt{(r')^2+r^2}}\right)=-\frac{(r')^2}{r^2\sqrt{(r')^2+r^2}}$$
I integrated both sides with respect to $\theta$ and obtained:
$$\frac{r'}{r\sqrt{(r')^2+r^2}}=-\frac{(r')^2\theta}{r^2\sqrt{(r')^2+r^2}}+C \tag{2}$$
Now, I realize I must solve this differential equation. I've tried simplifying it to obtain:
$$r\frac{dr}{d\theta}=-\left(\frac{dr}{d\theta}\right)^2\theta+Cr^2\sqrt{\left(\frac{dr}{d\theta}\right)^2+r^2} \tag{3}$$
However, I think I've hit a dead end. I'm not certain that it is solvable in terms of elementary functions. Both Mathematica and Wolfram|Alpha have not given me a solution to this differential equation.
To conclude, I would like some guidance on how to continue solving the differential equation, assuming I have done the calculation and methodology correctly so far. If I have not done the correct methodology, I would appreciate some guidance on how to proceed with the problem.
REPLY [2 votes]: Just to show that this can be done using the calculus of variations: start with your functional
$$
L(\theta,r,r')=\sqrt{1+\frac{(r')^2}{r^2}}
$$
Now, we have that $\partial L/\partial \theta = 0$, which implies (via the Beltrami identity) that the quantity
$$
L - r' \frac{\partial L}{\partial r'} = C
$$
where $C$ is a constant with respect to $\theta$. In your case, this implies that
$$
C = \sqrt{1+\frac{(r')^2}{r^2}} - r' \frac{r'/r^2}{\sqrt{1+\frac{(r')^2}{r^2}}} = \frac{1}{\sqrt{1+\frac{(r')^2}{r^2}}}
$$
Re-arranging, we find that
$$
\frac{r'}{r} = \sqrt{\frac{1}{C^2} - 1}
$$
which is itself another constant, $D$; thus we have $r' = D r$, whose general solution is $r = r_0 e^{D\theta}$ for constants $r_0, D$: the extremals are logarithmic spirals.
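As a check (a SymPy sketch), the Beltrami quantity is indeed constant along $r = r_0 e^{D\theta}$:
import sympy as sp
theta, D = sp.symbols('theta D', positive=True)
r, rp = sp.symbols('r rp', positive=True)
L = sp.sqrt(1 + rp**2/r**2)
beltrami = L - rp*sp.diff(L, rp)
# substitute the candidate extremal r = exp(D*theta), r' = D*exp(D*theta)
expr = beltrami.subs({r: sp.exp(D*theta), rp: D*sp.exp(D*theta)})
print(sp.simplify(expr))   # 1/sqrt(D**2 + 1), independent of theta
<|endoftext|>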
TITLE: How to show this inequality $\sum\sqrt{\frac{x}{x+2y+z}}\le 2$
QUESTION [6 upvotes]: Let $x,y,z,w>0$ show that
$$\sqrt{\dfrac{x}{x+2y+z}}+\sqrt{\dfrac{y}{y+2z+w}}+\sqrt{\dfrac{z}{z+2w+x}}+\sqrt{\dfrac{w}{w+2x+y}}\le 2$$
I tried C-S, but without success.
REPLY [2 votes]: Writing $(a,b,c,d)$ for $(x,y,z,w)$, by Cauchy-Schwarz:
$(LHS)^2\le \sum_{cyc}a(b+2c+d) \sum_{cyc}\frac{1}{(a+2b+c)(b+2c+d)}$
and
$\sum_{cyc}a(b+2c+d) \sum_{cyc}\frac{1}{(a+2b+c)(b+2c+d)}\le 4 \ \ \iff \ \ (a-c)^2(b-d)^2\ge 0$<|endoftext|>
TITLE: smoothness of solution to heat equation + differentiation under integral sign
QUESTION [6 upvotes]: I am reading Evan's PDE book, and I need some help understanding the following result
[Theorem 1 on pg47 of the book]
Let $g$ be continuous and essentially bounded function on $\mathbb{R}^n$ and let $K$ be the heat kernel. Then, the function $u$ which is a convolution of $g$ and $K$ is $C^{\infty}$.
The proof of this theorem goes as:
Since $K$ is infinitely differentiable, with uniformly bounded derivatives of all orders, on $[\delta, \infty)$ for each $\delta > 0$, we see that $u$ is $C^{\infty}$
I am not really understanding this proof.
(1) Am I correct that the uniform boundedness of derivatives of all orders means that:
There exists a constant $M$ such that for every non-negative integer $\alpha$ and multi-index $\beta$, $|\frac{\partial^\alpha}{\partial t^{\alpha}} D^{\beta} K(x,t)| \leq M$ for every $x$ and $t$ in $\mathbb{R}^n \times [\delta, \infty)$?
If so, how do I know that the derivatives are uniformly bounded? I know that each derivative is bounded since $t\geq \delta > 0 $, but how do I show the existence of the uniform constant $M$?
(2) Why does the uniform boundedness of all derivatives allow us to differentiate under the integral sign? If I let $\Delta f_h$ to denote the difference quotient for the corresponding derivative $Df$, then I can write
$$\int |\Delta f_h(x-y)| g(y) dy = \int |Df(x-y+c)| g(y) dy$$ for some $c$ between $x-y$ and $x-y+h$ by the mean value theorem, so I have as my dominating function $M |g(y)|$ where $M$ is the uniform bound constant, but since $g$ is only bounded and not necessarily integrable, so I cannot apply the dominated convergence theorem. What am I doing wrong?
It was never intuitively clear to me when differentiation under the integral sign is allowed and when it is not. For example,
(3) suppose that I have a function $f(x,y)$ in $\mathbb{R}^2$ and assume further that we know $\frac{\partial}{\partial x} f(x,y)$ exists and is integrable in $y$ over $\mathbb{R}$. Then, is it always the case
$\dfrac{d}{dx} \int_{\mathbb{R}} f(x,y)dy = \int_{\mathbb{R}} \frac{\partial}{\partial x} f(x,y) dy$?
REPLY [6 votes]: Let me give a simple example to illustrate what is required: Consider
$$g(x) = \int_{-\infty}^\infty f(x,y) \, dy.$$
The question is when can we exchange the derivative and integral, so that
$$g'(x) = \int_{-\infty}^\infty f_x(x,y) \, dy.$$
Taking difference quotients we have
$$\frac{g(x+h) - g(x)}{h} = \int_{-\infty}^\infty \frac{f(x+h,y)-f(x,y)}{h} \, dy.$$
So the question is really when we can exchange the limit as $h \to 0$ with the integral. To use the dominated convergence theorem, we need a dominating function that is integrable, so we need that for some $\delta>0$ there exists an integrable function $G(y)$ such that
$$\left|\frac{f(x+h,y)-f(x,y)}{h}\right| \leq G(y) \ \ \text{ for all } |h|<\delta.$$
By the mean value theorem
$$f(x+h,y) - f(x,y) = hf_x(z,y)$$
for some $z$ between $x$ and $x+h$. So it is enough to assume that for some $\delta>0$ there exists an integrable function $G(y)$ such that
$$|f_x(z,y)| \leq G(y) \ \ \text{for all } z \text{ with } |z-x|\leq \delta.$$
This condition is often called uniform integrability of $f_x$, meaning that $f_x$ is integrable uniformly in its first argument.
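To make this concrete, here is a small numerical sanity check (not part of Evans' text; the truncation window and step size are arbitrary choices) for the one-dimensional heat kernel with the bounded datum $g(y)=\sin y$, for which the convolution is exactly $e^{-t}\sin x$:
import numpy as np
from scipy.integrate import quad

def K(x, t):   # 1-D heat kernel
    return np.exp(-x**2 / (4*t)) / np.sqrt(4*np.pi*t)

def Kx(x, t):  # its x-derivative
    return -x / (2*t) * K(x, t)

g = np.sin     # bounded, continuous initial datum

def u(x, t):   # u = K * g, truncated to a window where K is negligible
    return quad(lambda y: K(x - y, t) * g(y), -30, 30)[0]

x0, t0, h = 0.7, 1.0, 1e-4
fd = (u(x0 + h, t0) - u(x0 - h, t0)) / (2*h)              # difference quotient
swap = quad(lambda y: Kx(x0 - y, t0) * g(y), -30, 30)[0]  # derivative inside
print(fd, swap, np.exp(-t0) * np.cos(x0))                 # agree to ~4 decimals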
All derivatives of the heat kernel satisfy the uniform integrability property as long as you restrict $t$ away from zero. This is what Evans means when he says "uniform boundedness".<|endoftext|>
TITLE: A closed form for a triple integral with sines and cosines
QUESTION [18 upvotes]: $$\small\int^\infty_0 \int^\infty_0 \int^\infty_0 \frac{\sin(x)\sin(y)\sin(z)}{xyz(x+y+z)}(\sin(x)\cos(y)\cos(z) + \sin(y)\cos(z)\cos(x) + \sin(z)\cos(x)\cos(y))\,dx\,dy\,dz$$
I saw this integral $I$ posted on a page on Facebook. The author claims that there is a closed form for it.
My Attempt
This can be rewritten as
$$3\small\int^\infty_0 \int^\infty_0 \int^\infty_0 \frac{\sin^2(x)\sin(y)\cos(y)\sin(z)\cos(z)}{xyz(x+y+z)}\,dx\,dy\,dz$$
Now consider
$$F(a) = 3\int^\infty_0 \int^\infty_0 \int^\infty_0\frac{\sin^2(x)\sin(y)\cos(y)\sin(z)\cos(z) e^{-a(x+y+z)}}{xyz(x+y+z)}\,dx\,dy\,dz$$
Taking the derivative
$$F'(a) = -3\int^\infty_0 \int^\infty_0 \int^\infty_0\frac{\sin^2(x)\sin(y)\cos(y)\sin(z)\cos(z) e^{-a(x+y+z)}}{xyz}\,dx\,dy\,dz$$
By symmetry we have
$$F'(a) = -3\left(\int^\infty_0 \frac{\sin^2(x)e^{-ax}}{x}\,dx \right)\left( \int^\infty_0 \frac{\sin(x)\cos(x)e^{-ax}}{x}\,dx\right)^2$$
Using W|A I got
$$F'(a) = -\frac{3}{16} \log\left(\frac{4}{a^2}+1 \right)\arctan^2\left(\frac{2}{a}\right)$$
By integration (using $F(a)\to 0$ as $a\to\infty$) we have
$$F(0) = \frac{3}{16} \int^\infty_0\log\left(\frac{4}{a^2}+1 \right)\arctan^2\left(\frac{2}{a}\right)\,da$$
Let $x = 2/a$
$$\tag{1}I = \frac{3}{8} \int^\infty_0\frac{\log\left(x^2+1 \right)\arctan^2\left(x\right)}{x^2}\,dx$$
Question
I have not been able to verify that (1) is correct, nor to find a closed form for it. Any ideas?
REPLY [2 votes]: $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
Given the 'cyclic symmetry' of your integral $\ds{\color{#f00}{\mc{J}}}$, it's equivalent to
\begin{align}
\color{#f00}{\mc{J}} & \equiv 3\int_{0}^{\infty}\!\!\int_{0}^{\infty}\!\!\int_{0}^{\infty}\!\!
\mrm{sinc}\pars{x}\mrm{sinc}\pars{y}\mrm{sinc}\pars{z}\sin\pars{x}\cos\pars{y}
\cos\pars{z}\ \times
\\[3mm] & \phantom{\equiv 3}
\underbrace{\bracks{\int_{0}^{\infty}\expo{-\pars{x + y + z}t}\,\dd t}}
_{\ds{1 \over x + y + z}}\dd x\,\dd y\,\dd z =
3\int_{0}^{\infty}\mrm{f}\pars{t}\mrm{g}^{2}\pars{t}\,\dd t
\\[5mm] & \mbox{where}\qquad
\left\{\begin{array}{rcl}
\ds{\mrm{f}\pars{t}} & \ds{=} &
\ds{\Im\mc{I}\pars{t}}
\\[2mm]
\ds{\mrm{g}\pars{t}} & \ds{=} &
\ds{\Re\mc{I}\pars{t}}
\\[2mm]
\ds{\mc{I}\pars{t}} & \ds{\equiv} &
\ds{\int_{0}^{\infty}\mrm{sinc}\pars{x}\expo{-\pars{t - \ic}x}\dd x}
\end{array}\right.\label{1}\tag{1}
\end{align}
Then
\begin{align}
\left.\mc{I}\pars{t}\right\vert_{\large\ \color{#f00}{t\ >\ 0}} & \equiv
\int_{0}^{\infty}\mrm{sinc}\pars{x}\expo{-\pars{t - \ic}x}\dd x =
\int_{0}^{\infty}
\overbrace{\pars{{1 \over 2}\int_{-1}^{1}\expo{\ic kx}\,\dd k}}
^{\ds{\mrm{sinc}\pars{x}}}\ \expo{-\pars{t - \ic}x}\dd x
\\[5mm] & =
{1 \over 2}\int_{-1}^{1}\int_{0}^{\infty}\expo{-\pars{t - \ic - \ic k}x}
\dd x\,\dd k =
{1 \over 2}\int_{-1}^{1}{\dd k \over t - \ic - \ic k} =
{1 \over 2}\int_{-1}^{1}{t + \pars{k + 1}\ic \over
\pars{k + 1}^{2} + t^{2}}\,\dd k
\\[5mm] & =
{1 \over 2}\int_{0}^{2}{t + k\ic \over k^{2} + t^{2}}\,\dd k =
{1 \over 2}\int_{0}^{2/t}{1 + k\ic \over k^{2} + 1}\,\dd k =
{1 \over 2}\arctan\pars{2 \over t} +
{1 \over 4}\ln\pars{{4 \over t^{2}} + 1 }\ic
\end{align}
$\ds{\color{#f00}{\mc{J}}}$ becomes (see \eqref{1}):
\begin{align}
\color{#f00}{\mc{J}} & =
3\int_{0}^{\infty}\bracks{{1 \over 4}\ln\pars{{4 \over t^{2}} + 1}}
\bracks{{1 \over 2}\arctan\pars{2 \over t}}^{2}\dd t
\\[5mm] & \stackrel{2/t\ \mapsto\ t}{=}\,\,\,
{3 \over 16}\int_{\infty}^{0}\ln\pars{t^{2} + 1}
\arctan^{2}\pars{t}\pars{-\,{2\,\dd t \over t^{2}}} =
{3 \over 8}\ \underbrace{\int_{0}^{\infty}
{\ln\pars{t^{2} + 1}\arctan^{2}\pars{t} \over t^{2}}\,\dd t}
_{\ds{{\Large\color{#f00}{\S}}: {\pi^{3} \over 12} + \pi\ln^{2}\pars{2}}}
\end{align}
$\ds{{\Large\color{#f00}{\S}}}$: The integral was already evaluated in the fine answer by $\texttt{@Zaid Alyafeai}$.
Finally, the answer to the proposed OP integral is given by
$$
\bbox[15px,#ffe,border:1px dotted navy]{\ds{{\color{#f00}{\mc{J}} =
{\pi^{3} \over 32} + {3 \over 8}\,\pi\ln^{2}\pars{2}}}} \approx 1.5350
$$<|endoftext|>
TITLE: Find the polynomials which satisfy the condition $f(x)\mid f(x^2)$
QUESTION [5 upvotes]: I want find the polynomials which satisfy the condition
$$f(x)\mid f(x^2).$$
I want to find such polynomials with integer coefficients, real number coefficients and complex number coefficients.
For example, $x$ and $x-1$ are the linear polynomials which satisfy this condition.
Here is one way to find the $2$-degree polynomials with integer coefficients.
Let the quadratic be $p=ax^2+bx+c$, so its value at $x^2$ is $q=ax^4+bx^2+c$. If $p$ is to be a divisor of $q$ let the other factor be $dx^2+ex+f.$ Equating coefficients gives equations
[1] $ad=a,$
[2] $ae+bd=0,$
[3] $af+be+cd=b,$
[4] $bf+ce=0,$
[5] $cf=c.$
Now we know $a,c$ are nonzero (else $p$ is not quadratic, or is reducible). So from [1] and [5] we have $d=f=1.$ Then from [2] and [4] we obtain $ae=ce.$ Here $e=0$ leads to $b=0$ from either [2] or [4], and [3] then reads $a+c=0$, so that $p=a(x^2-1)$ which is reducible. So we may assume $e$ is nonzero, and also $a=c.$
At this point, [2] and [4] say the same thing, namely $ae+b=0.$ So we may replace $b=-ae$ in [3] (with its $c$ replaced by $a$) obtaining
$a+(-ae)e+a=-ae,$ which on factoring gives $a(2-e)(e+1)=0.$ The possibility $e=2$ then leads after some algebra to $2a+b=0$ and $p=a(x-1)^2$ which is reducible, while the possibility $e=-1$ leads to $a=b$ and then $p=ax^2+ax+a$ as claimed.
Should we list all the irreducible polynomials of low degree and then check whether these polynomials satisfy the condition:
$x$
$x+1$
$x^2 + x + 1$
$x^3 + x^2 + 1$
$x^3 + x + 1$
$ x^4 + x^3 + x^2 + x + 1 $
$ x^4 + x^3 + 1 $
$ x^4 + x + 1 $
Over the real numbers, a polynomial can be factored into
$$(x-c_1)(x-c_2)\cdots(x^2-2a_1x+(a_1^2+b_1^2))(x^2-2a_2x+(a_2^2+b_2^2))\cdots$$ If all of these linear and quadratic factors satisfy $$f(x)\mid f(x^2),$$ does the polynomial satisfy it too? So what is the pattern for polynomials with real coefficients?
REPLY [4 votes]: The polynomials with $f(x)\mid f(x^2)$ are closed under multiplication. In fact, if $f$ is any such polynomial and $g(x)\mid f(x^2)/f(x)$, then $f(x)g(x)$ is also such a polynomial.
WLOG assume $x\nmid f(x)$. The relation $f(x)\mid f(x^2)$ implies
$$ \{\alpha:f(\alpha)=0\}\subseteq \{\beta:f(\beta^2)=0\}=\{\pm\sqrt{\beta}:f(\beta)=0\}.$$
Let $\alpha$ be a zero. Then $\alpha=\pm\sqrt{\beta}$ for some other zero $\beta$, or equivalently $\alpha^2=\beta$. Put another way, the square of any zero is also a zero, so the set of zeros is closed under squaring. Therefore we have a sequence of zeros $\alpha,\alpha^2,\alpha^4,\cdots$ which must eventually repeat since $f$ has finitely many zeros; then $\alpha^{2^n}=\alpha^{2^m}$ for some $n>m$, so $\alpha^{2^r(2^s-1)}=1$ and thus $\alpha$ is a root of unity.
We can restrict our attention to $f$ that cannot be written as a nontrivial product of other polynomials with this property. I don't think there's a very nice characterization of the possible set of zeros of $f$ beyond "start with a root of unity and keep squaring until you get a repeat." For example, over $\mathbb{C}$ we have that $f(x)=(x-i)(x+1)(x-1)$ is such a polynomial; it includes a kind of "cycle" of length two $\{-1,1\}$ in its zero set, but it also has a kind of "hangnail" at the front, namely $i$. If we think about this in terms of integers mod $n$, we can write $n=2^km$ and use the Chinese Remainder Theorem to track what $x\mapsto 2x$ does to an integer mod $n$; the sequence is eventually periodic but at the beginning the $\mathbb{Z}/2^k\mathbb{Z}$ coordinate may be nonzero.
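For instance, this example can be confirmed with a quick symbolic check (a sketch using SymPy):
import sympy as sp

x = sp.symbols('x')
# zero set {i, -1, 1} is closed under squaring: i^2 = -1, (-1)^2 = 1, 1^2 = 1
f = sp.expand((x - sp.I) * (x + 1) * (x - 1))
print(sp.rem(f.subs(x, x**2), f, x))   # prints 0, i.e. f(x) | f(x^2)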
To get the $f$ with real coefficients, just make sure the set $\{\alpha,\alpha^2,\alpha^4\cdots\}$ is closed under conjugation; if it isn't, then adjoin all their conjugates to construct an $f$ with real coefficients.
And to get $f$ with integer coefficients, if $f$ has an $n$th root of unity as a zero then $f$ is divisible by the cyclotomic polynomial $\Phi_n(x)$. If $n$ is even, then squaring primitive $2n$th roots of unity yield primitive $n$th roots of unity, meaning both $\Phi_{n}(x)$ and $\Phi_{n/2}(x)$ are factors. Writing $n=2^km$, this means it is divisible by $\Phi_{2^km}(x)\Phi_{2^{k-1}m}(x)\cdots\Phi_m(x)$. One can check these polynomials satisfy the condition.<|endoftext|>
TITLE: Asymptotic estimation problem about $\sum\limits_{j = 1}^n {\sum\limits_{i = 1}^n {\frac{{i + j}}{{{i^2} + {j^2}}}} } $
QUESTION [8 upvotes]: How to get$$\mathop {\lim }\limits_{n \to \infty } n\left( {\frac{\pi }{2} + \ln 2 - \frac{1}{n}\sum\limits_{j = 1}^n {\sum\limits_{i = 1}^n {\frac{{i + j}}{{{i^2} + {j^2}}}} } } \right).$$
I think we can use Euler–Maclaurin formula$$\sum_{n=a}^b f(n) \sim \int_a^b f(x)\,\mathrm{d}x + \frac{f(b) + f(a)}{2} + \sum_{k=1}^\infty \frac{B_{2k}}{(2k)!} \left(f^{(2k - 1)}(b) - f^{(2k - 1)}(a)\right),$$
where $a,b$ are both integers. But it seems difficult because of the double summation!
REPLY [9 votes]: Actually, for $S_n=\displaystyle\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{i+j}{i^2+j^2}$, the limit $S=\displaystyle\lim_{n\to\infty}\left[\left(\frac{\pi}{2}+\ln 2\right)n\color{red}{-\ln n}-S_n\right]$ exists.
This limit is seen as $1+\displaystyle\lim_{n\to\infty}(I_n-S_n)$, where
$$\begin{align*}I_n&=\iint_{[\frac{1}{2},n+\frac{1}{2}]^2}\frac{x+y}{x^2+y^2}\,dx\,dy\\ &=(2n+1)\ln(2n+1)\\ &-(n+1)\ln(2n^2+2n+1)\\ &+n(2\arctan(2n+1)-\pi/2).\end{align*}$$
To prove existence, note that $I_n-S_n=\displaystyle\sum_{i=1}^{n}\sum_{j=1}^{n}\Delta_{i,j}$, where
$$\begin{gather}\Delta_{i,j}=\iint_{[-\frac{1}{2},\frac{1}{2}]^2}\big(f(i+x,j+y)-f(i,j)\big)\,dx\,dy,\quad f(x,y)=\frac{x+y}{x^2+y^2};\\ \frac{1}{n!}\left|\frac{\partial^{n}\!f}{\partial x^k\partial y^{n-k}}\right|\leqslant\left|\frac{x+y}{x^2+y^2}\right|^{n+1},\quad \frac{\partial^2\!f}{\partial x^2}+\frac{\partial^2\!f}{\partial y^2}=0,\end{gather}$$
and Taylor's theorem produces $|\Delta_{i,j}|=\mathcal{O}\left(\big(\frac{i+j}{i^2+j^2}\big)^5\right)$ which is sufficient.
Computing $S$ using (its definition and) Lagrange-Zagier extrapolation, I get
$$ \color{blue}{S=1.0042628439817233943074076864477736788445647263436\ldots} $$
I wonder whether $S$ is related to some known mathematical constants...
The higher-order asymptotics can indeed be derived from Euler-Maclaurin formula (or its two-dimensional extension, but I'm going the elementary way below). Let
$$ \begin{gather}S_n\asymp Kn-\ln n-S-\sum_{k=1}^{(\infty)}\frac{a_k}{n^k},\quad K=\frac{\pi}{2}+\ln 2;\\ S_n-S_{n-1}\asymp K+\ln\Big(1-\frac{1}{n}\Big)+\sum_{k=2}^{(\infty)}n^{-k}\sum_{r=1}^{k-1}\binom{k-1}{r-1}a_r.\end{gather} $$
Now we apply E.-M. to $f(x)=2\dfrac{1+x}{1+x^2}, x\in[0,1]$. We have $\displaystyle\int_0^1 f(x)\,dx=K$,
$$ \begin{gather}\frac{1}{n}\left(\frac{f(0)+f(1)}{2}+\sum_{k=1}^{n-1}f\Big(\frac{k}{n}\Big)\right)=\frac{1}{n}+S_n-S_{n-1},\\ \frac{f^{(2k-1)}(0)}{(2k)!}=\frac{(-1)^{k-1}}{k},\quad\quad\frac{f^{(2k-1)}(1)}{(2k)!}=-\frac{(-1)^{\lfloor k/2\rfloor}}{2^k\cdot k}\end{gather} $$
(the last two e.g. from power series); Euler-Maclaurin gives
$$ S_n-S_{n-1}\asymp K-\frac{1}{n}-\sum_{k=1}^{(\infty)}\frac{c_k}{n^{2k}},\quad c_k=\frac{B_{2k}}{k}\Big((-1)^{k-1}+2^{-k}(-1)^{\lfloor k/2\rfloor}\Big). $$
Thus,
$$ \sum_{r=1}^{k-1}\binom{k-1}{r-1}a_r=b_k=\frac{1}{k}-\begin{cases}c_{k/2}&(k\text{ even})\\ 0 &(k\text{ odd})\end{cases}\quad(k>1) $$
and, recognizing the inverse matrix for this system here, we finally get
$$ a_n=\frac{1}{n}\sum_{k=1}^{n}\binom{n}{k}B_{n-k}b_{k+1}. $$
This sequence begins with
$$ \frac{1}{4}, \frac{1}{24}, -\frac{7}{144}, \frac{3}{160}, 0, -\frac{1}{2016}, -\frac{19}{2688}, \frac{31}{3840}, 0, \frac{1}{4224}, -\frac{1453}{59136}, \frac{29713}{698880}, 0, \ldots $$
(this coincides with the values computed numerically in a prior edition of this answer).<|endoftext|>
TITLE: If a triangle ABC has sides $a,b,c$ in A.P then what is the largest possible value of $\angle B$.
QUESTION [6 upvotes]: It is easy to see that the maximum value $\angle B=60^\circ$ is attained for an equilateral triangle, since for other triangles $\angle C$ or $\angle A$ would be the largest angle, depending on the sign of the common difference of the sides.
But how do I prove it using trigonometry, geometry, or even calculus?
I tried taking the sides as $a-d,\; a,\; a+d$ and applying the Law of Sines, but couldn't get the result.
REPLY [4 votes]: Let the sides be $b-d, b, b+d$ with $d\ge 0$. To form a triangle, one needs $d<b/2$. By the Law of Cosines,
$$\cos B=\frac{(b-d)^2+(b+d)^2-b^2}{2(b-d)(b+d)}=\frac{b^2+2d^2}{2(b^2-d^2)}\ge\frac{1}{2},$$
so $\angle B\le 60^\circ$, with equality exactly when $d=0$, i.e. in the equilateral case.<|endoftext|>
TITLE: Convergence of $1 +\frac{1}{5}+\frac{1}{9} +\frac{1}{13}+\dots$
QUESTION [5 upvotes]: This is probably incredibly simple, but we've just started the topic, and we've just gone over geometric series, p-series, and harmonic series. It's simple when the series is given explicitly in sigma notation, but I struggle when they don't give the form and just give the first few terms. The question exactly is:
Determine whether the following series converges or diverges. Give a reason for your answer.
$$1+\frac15+\frac19+\frac1{13}\dots$$
Any tips/hints/help would be much appreciated.
REPLY [4 votes]: You could also observe that
$$1 +\frac{1}{5}+\frac{1}{9}\cdots \geq 1 + \frac{1}{5}\left (1 +\frac{1}{2}+\frac{1}{3} +\cdots\right)$$
The comparison holds termwise, since $4n+1\le 5n$ for $n\ge 1$. The term inside the brackets is the harmonic series, which....<|endoftext|>
TITLE: Evaluate $\prod_{n=1}^\infty \frac{2^n-1}{2^n}$
QUESTION [6 upvotes]: Is there a closed form expression for this limit?
$$\prod_{n=1}^\infty \frac{2^n-1}{2^n}$$
Wolfram Alpha says $0.2887880950866024212788997219292307800889\dots$
and the Inverse Symbolic Calculator found nothing but the above expression.
REPLY [8 votes]: It is equal to $\phi(1/2)$ where $\phi(q)$ is the Euler function, defined by
$$
\phi(q)=\prod_{n=1}^{\infty}(1-q^n).
$$
This is closely related to the $q$-Pochhammer symbol as well.
From Euler's pentagonal number theorem one obtains the following rapidly convergent binary expansion for $\phi(1/2)$:
$$
\phi(1/2)=\sum_{n=-\infty}^{\infty}(-1)^n2^{(-3n^2+n)/2},
$$
that is,
$$
\phi(1/2)=1-\frac{1}{2}-\frac{1}{2^2}+\frac{1}{2^5}+\frac{1}{2^7}-\frac{1}{2^{12}}-\frac{1}{2^{15}}+\cdots
$$
with the signs repeating in the pattern $-,-,+,+$ and the exponents growing quadratically.
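As a quick numerical cross-check (a sketch using mpmath), the truncated product and the truncated pentagonal-number series agree to full working precision:
from mpmath import mp, mpf

mp.dps = 40
prod = mpf(1)
for n in range(1, 200):                  # the product converges geometrically
    prod *= 1 - mpf(2)**(-n)
pent = sum((1 if n % 2 == 0 else -1) * mpf(2)**((n - 3*n*n) // 2)
           for n in range(-30, 31))      # exponents grow quadratically
print(prod)   # 0.2887880950866024212788997219292307800889
print(pent)   # identical output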
Proof of transcendence.
As pointed out by P. Singh in the comments above, $\phi(1/2)$ is known to be transcendental. This follows from results established in
Nesterenko, Yu. V. (1996), Modular functions and transcendence questions, Mat. Sb. 187(9), pp. 65–96. MR1422383.
Since this article does not have open access, I am posting the statement of the main theorem below.
We use the following identity (whose proof is indicated below)
$$
\phi(q)^{24}=\frac{Q(q)^3-R(q)^2}{1728q}
$$
to observe that, if $\phi(1/2)$ were algebraic, then $Q(1/2)$ and $R(1/2)$ would be algebraically dependent, contradicting the theorem. Thus $\phi(1/2)$ is transcendental, as claimed.
Proof of the identity: This is equivalent to a well-known identity expressing the modular discriminant in terms of Eisenstein series.<|endoftext|>
TITLE: What is $x$, if $3^x+3^{-x}=1$?
QUESTION [6 upvotes]: I came across a really brain-racking problem.
Determine $x$, such that $3^x+3^{-x}=1$.
This is how I tried solving it:
$$3^x+\frac{1}{3^x}=1$$
$$3^{2x}+1=3^x$$
$$3^{2x}-3^x=-1$$
Let $A=3^x$.
$$A^2-A+1=0$$
$$A=\frac{1±\sqrt{(-1)^2-4\cdot1\cdot1}}{2\cdot1}$$
$$A=\frac{1±\sqrt{-3}}{2}$$
I end up with
$$A=\frac{1±i\sqrt{3}}{2}$$
which yields no real solution. And this is not the expected answer.
I'm a 7th grader, by the way. So, I've very limited knowledge on mathematics.
EDIT
I made one interesting observation: since the product $3^x\cdot 3^{-x}=1$ and the sum $3^x+3^{-x}=1$, the numbers $3^x$ and $3^{-x}$ are the two roots of a quadratic whose middle coefficient is $-1$:
$$t^2-t+1=0.$$
REPLY [4 votes]: Just building upon previous comments: as you pointed out, there is no real solution, but the complex solutions can be found analytically as follows.
$3^x = \dfrac{1\pm\sqrt{3} i}{2}$
Re-expressing the RHS (taking the principal root) in polar notation:
$3^x = e^{i \dfrac{\pi}{3}}$
and rewriting the LHS in base $e$:
$3^x = e^{\ln{3^x}} = e^{x\ln{3}} $
Then:
$\boxed{x = i \dfrac{\pi}{3\ln{(3)}}}$
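A one-line numerical check of the boxed value, using Python's complex arithmetic:
import cmath
x = 1j * cmath.pi / (3 * cmath.log(3))
print(3**x + 3**(-x))   # (1+0j), up to rounding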
Note: this is the principal value solution. Due to the periodicity of the function, any $x = i \dfrac{\pi}{3\ln{(3)}} + i \dfrac{2\pi n}{\ln{3}}$ with $n\in \mathbb{Z}$ is also a solution, and the conjugate root $3^x=e^{-i\pi/3}$ gives $x = - i \dfrac{\pi}{3\ln{(3)}} + i \dfrac{2\pi n}{\ln{3}}$ as well.<|endoftext|>
TITLE: Is $1$ a limit point of the fractional part of $1.5^n$?
QUESTION [16 upvotes]: It is an open problem whether the fractional part of $\left(\dfrac32\right)^n$ is dense in $[0,1]$.
The problem is: is $1$ a limit point of the above sequence?
An equivalent formulation is: $\forall \epsilon > 0: \exists n \in \Bbb N: 1 - \{1.5^n\} < \epsilon$ where $\{x\}$ denotes the fractional part of $x$.
Here is a table of $n$ against $\epsilon$ that I computed:
$\begin{array}{|c|c|}\hline
\epsilon & n \\\hline
1 & 1 \\\hline
0.5 & 5 \\\hline
0.4 & 8 \\\hline
0.35 & 10 \\\hline
0.3 & 12 \\\hline
0.1 & 14 \\\hline
0.05 & 46 \\\hline
0.01 & 157 \\\hline
0.005 & 163 \\\hline
0.001 & 1256 \\\hline
0.0005 & 2677 \\\hline
0.0001 & 8093 \\\hline
0.00001 & 49304 \\\hline
0.000005 & 158643 \\\hline
0.0000005 & 835999 \\\hline
\end{array}$
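For what it's worth, a few rows of this table can be reproduced with exact rational arithmetic and a brute-force scan (a sketch; smaller thresholds take correspondingly longer):
from fractions import Fraction

def frac_part(q):                     # fractional part of a Fraction
    return q - q.numerator // q.denominator

targets = [Fraction(1, d) for d in (1, 2, 10, 100, 1000)]
power, n = Fraction(3, 2), 1
for eps in targets:                   # eps shrinks, so the answer n only grows
    while not (1 - frac_part(power) < eps):
        power *= Fraction(3, 2)
        n += 1
    print(f"eps = {eps}: n = {n}")    # n = 1, 5, 14, 157, 1256 as in the table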
References
Unsolved Problems, edited by O. Strauch: in section 2.4, Exponential sequences, it is explicitly mentioned that both questions, whether $(3/2)^n\bmod 1$ is dense in $[0,1]$ and whether it is uniformly distributed in $[0,1]$, are open conjectures.
Power Fractional Parts, on Wolfram Mathworld, "just because the Internet says so"
REPLY [6 votes]: Another comment, but too big for the standard comment box: an atanh() rescaling might be an interesting thing, see my example (the plot described below).
The pink and the blue lines are hull curves connecting the points $\small (N,f(N))$ where $f(N)$ is extremal (with moving maxima/minima), and the grey dots are points $\small (N,f(N))$ for $\small N \le 1000$, which illustrate the generally random distribution of the $\small f(N)$.
The grey lines are manually chosen smooth subsets of the extremal data, symmetrized (by merging the datasets and adapting signs) to show the rough tendency of the growth of the vertical intervals.
I like that the atanh() scaling seems to suggest a roughly linear increase/decrease of the hull curves.
[update] The data for the picture were extended by data from the OP, from OEIS A153663 (magenta upper curve), and from OEIS A081464 (blue lower curve). Note that the OEIS has even more data points, but those required excessive memory/time to compute the high powers of $(3/2)$ and their fractional parts.<|endoftext|>
TITLE: Limit of a monotonically increasing sequence and decreasing sequence
QUESTION [9 upvotes]: If a sequence ($a_n$) is monotonically increasing, and ($b_n$) is a decreasing sequence, with $\lim_{n\to\infty}\,(b_n-a_n)=0$, show that $\lim a_n$ and $\lim b_n$ both exist, and that $\lim a_n=\lim b_n$.
My attempt:
To show that the limits of both sequences exist, I think I should be using the Monotone Convergence Theorem (MCT). For that I would need to show that the sequences are bounded.
($a_n$) is increasing, and so it should be bounded below. ($b_n$) is decreasing, so it should be bounded above. The challenge here is to show that ($a_n$) can be bounded above and ($b_n$) can be bounded below. This should utilise the third condition, from which I get:
$$\begin{align*}
& \lim_{n\to\infty}\,(b_n-a_n)=0 \\[3pt]
\iff & \forall\varepsilon>0,\ \exists N\in \mathbb{N} \text{ s.t. } \forall n\geq N,\ |{b_n-a_n}|<\varepsilon
\end{align*}$$
I then tried using the triangle inequality:
$$ |b_n|-|a_n|\leq|b_n-a_n|<\varepsilon$$
but I'm not sure where to go from here.
REPLY [2 votes]: Since $\lim_{n\to\infty}(b_n-a_n)=0$, there is an $N$ such that $|a_n-b_n|<1$ for all $n\ge N$. ($1$ is a number that I have just chosen for $\varepsilon$.) Since $b_n$ is decreasing, we have $a_n<b_n+1\le b_N+1$ for all $n\ge N$, so $(a_n)$ is bounded above; similarly, $b_n>a_n-1\ge a_N-1$ for all $n\ge N$, so $(b_n)$ is bounded below. By the Monotone Convergence Theorem both limits exist, and since $b_n-a_n\to 0$, the two limits are equal.<|endoftext|>
TITLE: Irrationality of $\sum\limits_{n=1}^{\infty} r^{-n^{2}}$ for every integer $r > 1$
QUESTION [11 upvotes]: In the preface to Introduction to Algebraic Independence Theory Yuri V. Nesterenko mentions the series $$f(r) = \sum_{n=1}^{\infty} \frac {1}{r^{n^{2}}}$$ which was introduced as an example by Joseph Liouville in 1851, who proved that $f(r) $ is irrational for all integers $r>1$.
It appears that the proof is elementary like Liouville's proofs for irrationality of $e^{2}$ and $e^{4}$ discussed in my blog posts. Is there any simple way to prove the irrationality of $f(r) $? Or perhaps a reference regarding Liouville's proof?
REPLY [3 votes]: I found this proof which sounds like something Liouville would have done. Let:
$$\mathcal{L}=\sum_{h=1}^\infty r^{-h^2}$$
$$\frac p{r^{n^2}} = \sum_{h=1}^n r^{-h^2}$$
$$r^{-(x-1)^2}\geq r^{-\lfloor x \rfloor ^2}\qquad(x\ge 1)$$
$$\int_n^\infty r^{-(x-1)^2}dx\geq\int_n^\infty r^{-\lfloor x \rfloor ^2}dx=\sum_{h=n}^\infty r^{-h^2}$$
Applying this with $n+1$ in place of $n$ and substituting $y=(x-1)\sqrt{\ln r}$,
$$\ln(r)^{-1/2}\int_{n\ln(r)^{1/2}}^\infty e^{-y^2}dy\geq \sum_{h=n+1}^\infty r^{-h^2}=\mathcal{L}-\frac p{r^{n^2}}$$
This limit (confirmed by Wolfram Alpha) follows since $\int_x^\infty e^{-y^2}dy\leq\int_x^\infty \frac yx e^{-y^2}dy=\frac{e^{-x^2}}{2x}$ for $x>0$:
$$\lim_{n\to\infty}r^{n^2}\int_{n\ln(r)^{1/2}}^\infty e^{-y^2}dy=\lim_{x\to\infty}e^{x^2}\int_x^\infty e^{-y^2}dy=0$$
Then $$r^{n^2}\left(\mathcal L -\frac p{r^{n^2}}\right)\leq \ln(r)^{-1/2}r^{n^2}\int_{n\ln(r)^{1/2}}^\infty e^{-y^2}dy=\epsilon$$
Where $\epsilon$ can be made arbitrarily small. Then
$$0<\mathcal L -\frac p{r^{n^2}}\leq\frac \epsilon {r^{n^2}}$$
Let $r^{n^2}=q$. If $\mathcal L$ were rational, say $\frac ab$
$$0<\frac ab-\frac pq=\frac{aq-bp}{bq}\leq \frac\epsilon q$$
$$aq-bp>0$$
$$aq-bp\leq\epsilon b$$
The LHS is a positive integer, and the RHS can be made arbitrarily small. Contradiction.<|endoftext|>
TITLE: Physics and the Apéry constant, an example for mathematicians
QUESTION [8 upvotes]: Wikipedia's entry for Apéry's constant tells us that the constant $$\zeta(3)=\sum_{n=1}^\infty\frac{1}{n^3}$$ arises in physical problems.
Question. Can you describe, from a popular-science viewpoint but with mathematical details where possible, a nice physical problem involving Apéry's constant? Many thanks.
I know that this is something of a curiosity, but if you know an example of such a concrete problem in physics (see the problems Wikipedia refers to, or others) and can show or explain the calculations after introducing the physical problem, then your answer should be nice for all of us.
REPLY [3 votes]: The $\zeta(3)$ constant appears in fluid mechanics as the added mass of a sphere approaching a wall, such as a raindrop (Weihs & Small, 1975), in the form $3\zeta(3) -2$.
Weihs, D.; Small, R. D., An exact solution of the motion of two adjacent spheres in axisymmetric potential flow, Israel J. Technol. 13, 1-6 (1975). ZBL0318.76010.<|endoftext|>
TITLE: Closed form for $\int_{0}^{\infty }\!{\rm erf} \left(cx\right) \left( {\rm erf} \left(x \right) \right) ^{2}{{\rm e}^{-{x}^{2}}}\,{\rm d}x$
QUESTION [7 upvotes]: I encountered this integral in my calculations:
$$\int_{0}^{\infty }\!{\rm erf} \left(cx\right) \left( {\rm erf} \left(x\right) \right) ^{2}{{\rm e}^{-{x}^{2}}}\,{\rm d}x$$
where $c>0$ and $c\in \mathbb{R}$
but could not find a closed-form representation for it.
I also tried to find possible closed forms using Inverse Symbolic Calculator and WolframAlpha but they did not find anything.
I was looking in the book "Integrals and Series, Volume 2" by Prudnikov, Brychkov, and Marychev, but did not find a similar formula.
I am not sure a closed form exists, but if it does, I want to know it. Closed forms are easier to manipulate; sometimes closed forms of different integrals or sums contain terms that cancel each other, etc.
Could you please help me to find a closed form (even using non-elementary special functions), if it exists?
REPLY [2 votes]: Let us denote:
\begin{equation}
{\mathcal I}(c) := \int\limits_0^\infty \operatorname{erf}(c x) \cdot [\operatorname{erf}( x)]^2 e^{-x^2} dx
\end{equation}
By differentiating with respect to the parameter $c$ we have:
\begin{equation}
\frac{ d }{d c} {\mathcal I}(c) = \frac{2^2}{\pi^{3/2}} \frac{1}{1+c^2} \cdot \frac{1}{\sqrt{2+c^2}} \cdot \arctan\left( \frac{1}{\sqrt{2+c^2}}\right)
\end{equation}
therefore the only thing we need to do is to integrate the right-hand side. I have calculated a more general integral that contains this one as a special case in A generalized Ahmed's integral. Here I only state the result:
\begin{eqnarray}
&&{\mathcal I}(c) = \frac{4}{\pi^{3/2}} \left(\right.\\
&& \arctan( \frac{c}{\sqrt{2+c^2}}) \arctan( \frac{1}{\sqrt{2+c^2}})+\\
&& \frac{\imath}{2} \left.\left[
{\mathcal F}^{(\alpha_-,+e^{-\imath \phi})}(t)+
{\mathcal F}^{(\alpha_-,-e^{-\imath \phi})}(t)-
{\mathcal F}^{(\alpha_-,-e^{+\imath \phi})}(t)-
{\mathcal F}^{(\alpha_-,+e^{+\imath \phi})}(t)
\right]\right|_0^B-\\
&& \frac{\imath}{2} \left.\left[
{\mathcal F}^{(\alpha_+,+e^{-\imath \phi})}(t)+
{\mathcal F}^{(\alpha_+,-e^{-\imath \phi})}(t)-
{\mathcal F}^{(\alpha_+,-e^{+\imath \phi})}(t)-
{\mathcal F}^{(\alpha_+,+e^{+\imath \phi})}(t)
\right]\right|_0^B
\left.\right)
\end{eqnarray}
where $\alpha_- = \sqrt{2}-1$, $\alpha_+:=\sqrt{2}+1$, $\phi:= \arccos(1/\sqrt{3})$, $B:=(-\sqrt{2}+\sqrt{2+c^2})/c$
and
\begin{eqnarray}
&&{\mathcal F}^{(a,b)}(t):=\int \arctan(\frac{t}{a}) \frac{1}{t-b} dt = \log(t-b) \arctan(\frac{t}{a})\\
&&-\frac{1}{2 \imath} \left( \log(t-b) \left[ \log(\frac{t-\imath a}{b-\imath a}) - \log(\frac{t+\imath a}{b+\imath a})\right] + Li_2(\frac{b-t}{b-\imath a}) - Li_2(\frac{b-t}{b+\imath a})\right)
\end{eqnarray}
Update:
Note that the anti-derivative ${\mathcal F}^{(a,b)}(t)$ may have a jump. This happens if and only if either the quantity $(t+\imath a)/(b+\imath a)$ or the quantity $(t-\imath a)/(b-\imath a)$ crosses the negative real axis for some $t\in(0,B)$. The effect is that the argument of the logarithm jumps by $2\pi$. In order to take this into account we have to exclude from the integration region a small vicinity of the singularity in question. In other words, the correct formula reads:
\begin{eqnarray}
&&{\mathcal I}(c) = \frac{4}{\pi^{3/2}} \left(\right.\\
&& \arctan( \frac{c}{\sqrt{2+c^2}}) \arctan( \frac{1}{\sqrt{2+c^2}})+\\
&& \frac{\imath}{2}\left[
{\bar {\mathcal F}}^{(\alpha_-,+e^{-\imath \phi})}(0,B)+
{\bar {\mathcal F}}^{(\alpha_-,-e^{-\imath \phi})}(0,B)-
{\bar {\mathcal F}}^{(\alpha_-,-e^{+\imath \phi})}(0,B)-
{\bar {\mathcal F}}^{(\alpha_-,+e^{+\imath \phi})}(0,B)
\right]-\\
&& \frac{\imath}{2} \left[
{\bar {\mathcal F}}^{(\alpha_+,+e^{-\imath \phi})}(0,B)+
{\bar {\mathcal F}}^{(\alpha_+,-e^{-\imath \phi})}(0,B)-
{\bar {\mathcal F}}^{(\alpha_+,-e^{+\imath \phi})}(0,B)-
{\bar {\mathcal F}}^{(\alpha_+,+e^{+\imath \phi})}(0,B)
\right]
\left.\right)
\end{eqnarray}
where
\begin{eqnarray}
{\bar {\mathcal F}}^{(a,b)}(0,B) &:=&
{\mathcal F}^{(a,b)}(B)-{\mathcal F}^{(a,b)}(A) +\\
&&
1_{t^{(*)}_+ \in (0,1)} \left( -{\mathcal F}^{(a,b)}(B(t^{(*)}_+ +\epsilon))+{\mathcal F}^{(a,b)}(B(t^{(*)}_+ -\epsilon))\right)+\\
&&
1_{t^{(*)}_- \in (0,1)} \left( -{\mathcal F}^{(a,b)}(B(t^{(*)}_- +\epsilon))+{\mathcal F}^{(a,b)}(B(t^{(*)}_- -\epsilon))\right)
\end{eqnarray}
where
\begin{eqnarray}
t^{(*)}_\pm:= \frac{Im[\mp \imath a(\bar{b} \mp \imath a)]}{B Im[\bar{b} \mp \imath a]}
\end{eqnarray}
See Mathematica code below for testing:
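(* F implements the antiderivative calF^(a,b)(t) defined above; FF evaluates it
   between A and B, adding back the 2 Pi jumps of the logarithms' arguments that
   occur at the points tsp, tsm; the final loop draws random values of c and
   compares the closed form x2 against direct numerical integration x1 (the two
   unassigned NIntegrate expressions are intermediate cross-checks). *)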
F[t_, a_, b_] :=
Log[t - b] ArcTan[t/a] -
1/(2 I) (Log[
t - b] (Log[(t - I a)/(b - I a)] -
Log[(t + I a)/(b + I a)]) + PolyLog[2, (b - t)/(b - I a)] -
PolyLog[2, (b - t)/(b + I a)]);
FF[A_, B_, a_, b_] := Module[{res, rsp, rsm, tsp, tsm, eps = 10^(-9)},
res = F[B, a, b] - F[A, a, b];
tsp = -(Im[I a (Conjugate[b] - I a)]/(B Im[Conjugate[b] - I a]));
tsm = +(Im[I a (Conjugate[b] + I a)]/(B Im[Conjugate[b] + I a]));
(*
If[0\[LessEqual] tsp\[LessEqual]1,Print["Jump +!!"]];
If[0\[LessEqual] tsm\[LessEqual]1,Print["Jump -!!"]];
*)
rsp = If[
0 <= tsp <= 1, -F[A + (tsp + eps) (B - A), a, b] +
F[A + (tsp - eps) (B - A), a, b], 0];
rsm = If[
0 <= tsm <= 1, -F[A + (tsm + eps) (B - A), a, b] +
F[A + (tsm - eps) (B - A), a, b], 0];
res + rsp + rsm
];
For[count = 1, count <= 100, count++,
c = RandomReal[{-10, 10}, WorkingPrecision -> 50];
x1 = NIntegrate[Erf[c x] Erf[x]^2 Exp[-x^2], {x, 0, Infinity},
WorkingPrecision -> 30];
4/Pi^(3/2)
NIntegrate[
1/(1 + xi^2) 1/Sqrt[2 + xi^2] ArcTan[1/Sqrt[2 + xi^2]], {xi, 0,
c}];
A1 = 1; A2 = 1; A3 = c;
phi = ArcCos[1/Sqrt[3]]; B = (-Sqrt[2] + Sqrt[2 + c^2])/c;
4/Pi^(3/
2) ((ArcTan[c/Sqrt[2 + c^2]]) ArcTan[1 /Sqrt[2 + c^2]] +
4 Sqrt[2]
NIntegrate[(ArcTan[t/(Sqrt[2] - 1)] -
ArcTan[t/(Sqrt[2] + 1)])
t/((1 - t^2)^2 + (2 ) (1 + t^2)^2), {t,
0, (-Sqrt[2] + Sqrt[2 + c^2])/c}, WorkingPrecision -> 30]);
4/Pi^(3/
2) ((ArcTan[c/Sqrt[2 + c^2]]) ArcTan[1 /Sqrt[2 + c^2]] +
I/2 NIntegrate[(ArcTan[t/(Sqrt[2] - 1)] -
ArcTan[t/(Sqrt[2] + 1)]) (1/(t - E^(-I phi)) -
1/(t - E^(I phi)) + 1/(t + E^(-I phi)) -
1/(t + E^(I phi))), {t, 0, (-Sqrt[2] + Sqrt[2 + c^2])/c},
WorkingPrecision -> 30]);
x2 = 4/Pi^(3/2) ((ArcTan[c/Sqrt[2 + c^2]]) ArcTan[1/Sqrt[2 + c^2]] +
I/2 (FF[0, B, (Sqrt[2] - 1), 1/Sqrt[3] - I Sqrt[2/3]] +
FF[0, B, (Sqrt[2] - 1), -(1/Sqrt[3]) + I Sqrt[2/3]] -
FF[0, B, (Sqrt[2] - 1), -(1/Sqrt[3]) - I Sqrt[2/3]] -
FF[0, B, (Sqrt[2] - 1), 1/Sqrt[3] + I Sqrt[2/3]]) -
I/2 (FF[0, B, (Sqrt[2] + 1), 1/Sqrt[3] - I Sqrt[2/3]] +
FF[0, B, (Sqrt[2] + 1), -(1/Sqrt[3]) + I Sqrt[2/3]] -
FF[0, B, (Sqrt[2] + 1), -(1/Sqrt[3]) - I Sqrt[2/3]] -
FF[0, B, (Sqrt[2] + 1), 1/Sqrt[3] + I Sqrt[2/3]]));
If[Abs[x2/x1 - 1] > 10^(-3),
Print["results do not match..", {c, {x1, x2}}]; Break[]];
If[Mod[count, 10] == 0, PrintTemporary[count]];
];<|endoftext|>
TITLE: Strengthening the Sylvester-Schur Theorem
QUESTION [6 upvotes]: The Sylvester-Schur Theorem states that if $x > k$, then in the set of integers: $x, x+1, x+2, \dots, x+k-1$, there is at least $1$ number containing a prime divisor greater than $k$.
It has always struck me that this theorem is significantly weaker than the actual reality, especially as $x$ gets larger.
As I was trying to check my intuition, I had the following thought:
Let $k$ be any integer greater than $1$
Let $p_n$ be the $n$th prime such that $p_n \le k < p_{n+1}$.
If an integer $x$ is sufficiently large, then it follows that in the set of integers: $x, x+1, x+2, \dots, x+k-1$, there are at least $k-n$ numbers containing a prime divisor greater than $k$.
Here's my argument:
(1) Let $k > 1$ be an integer with $p_n \le k < p_{n+1}$ where $p_n$ is the $n$th prime.
(2) Let $x > 2p_n$ be an integer
(3) Let $0 \le t_1 < p_n$ be the smallest integer such that $gpf(x+t_1) \le p_n$, where gpf() = greatest prime factor.
(4) It is clear that $x+t_1$ consists of at least one prime divisor $q$ where $q \le p_n$
(5) Let $t_1 < t_2 < p_n$ be the second smallest integer such that $gpf(x+t_2) \le p_n$.
(6) Let $f = gcd(x + t_1,t_2 - t_1)$ where gcd() = greatest common divisor.
(7) Let $u = \frac{x+t_1}{f}, v = \frac{t_2-t_1}{f}$ so that $u > 2$ and $1 \le v < p_n$ and $gcd(u+v,x+t_1)=1$
(8) $x+t_2 = uf + vf = f(u+v)$, and since $u+v > 3$, there exists a prime $q$ that divides $u+v$ but does not divide $x+t_1$.
(9) Let $t_2 < t_3 < p_n$ be the third smallest integer such that $gpf(x+t_3) \le p_n$
(10) We can use the same arguments as steps (5) thru steps (8) to show that $x+t_3$ contains a prime divisor relatively prime to $x+t_1$ and relatively prime to $x+t_2$
Let $f_1 = gcd(x+t_1,t_3-t_1), u_1 = \frac{x+t_1}{f_1}, v_1 = \frac{t_3-t_1}{f1}$
Let $f_2 = gcd(x+t_2,t_3-t_2), u_2 = \frac{x+t_2}{f_2}, v_2 = \frac{t_3-t_2}{f_2}$
$x+t_3 = f_1(u_1 + v_1) = f_2(u_2 + v_2)$ and $gcd(u_1 + v_1,x+t_1)=1, gcd(u_2 + v_2,x+t_2)=1$
Let $h = gcd(f_1,f_2)$ so that $gcd(\frac{f_1}{h},\frac{f_2}{h})=1$
Then, $\frac{f_1}{h}(u_1 + v_1) = \frac{f_2}{h}(u_2+v_2)$
And: $\frac{u_1+v_1}{\frac{f_2}{h}} = \frac{u_2+v_2}{\frac{f_1}{h}}$
(11) We can repeat this argument until $x+t_n$ at which point there are no more primes less than or equal to $p_n$.
(12) We can thus use this same argument to show that all remaining integers in the sequence $x,x+1, x+2, \dots x+k-1$ have at least one prime divisor greater than $p_n$.
Of course, in order to make this argument, $x$ may well need to be greater than $(p_n) ^ n$ since I am assuming that at each point $\frac{u_i + v_i}{\frac{f_i}{h}} > p_n$.
Is my reasoning sound?
Is this a known property of large numbers?
Is there a more precise formulation for smaller numbers? For example, my argument seems like it could be improved to argue that for $x > 2p_n$, there are at least $2$ numbers with a prime divisor greater than $p_n$.
Edit: I found a simpler argument (modified on 12/28/2017)
Let $w > 1$ be an integer
Let $p_n$ be the $n$th prime such that $p_n \le w < p_{n+1}$
Let $R(p,w)$ be the largest integer $r$ such that $p$ is a prime and $p^r \le w$ but $p^{r+1} > w$
Let $x > \prod\limits_{p < w} p^{R(p,w)}$ be an integer
Let $i$ be an integer such that $0 \le i < w$
I claim that if $gpf(x+i) \le p_n$, then there exists $k,v$ such that $1 \le k \le n$ and $(p_k)^v \ge w$ and $(p_k)^v | x+i$
Assume no such $k,v$ exists. It follows that $x+i \le \prod\limits_{p < w} p^{R(p,w)}$, which contradicts the assumption on $x$.
I also claim that there are at most $n$ instances where $gpf(x+i) \le p_n$.
Assume that there exist integers $v_2 \ge v_1$ and $i \ne j$ where $(p_k)^{v_1} | x+i$ and $(p_k)^{v_2} | x+j$, with $(p_k)^{v_1} \ge w$ as above.
Then there exists positive integers $a,b$ such that $a(p_k)^{v_1} = x+i$ and $b(p_k)^{v_2} = x+j$
Let $u = (x+j) - (x+i) = j - i = (p_k)^{v_1}\left(b(p_k)^{v_2 - v_1} - a\right)$
We can assume $u$ is positive, since if it were negative we could set $u = (x+i) - (x+j)$ instead.
We can therefore assume that $b(p_k)^{v_2 - v_1} - a \ge 1$.
But now we have a contradiction: $(p_k)^{v_1}$ divides $u = j - i$, yet $0 < j - i < w \le (p_k)^{v_1}$.
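A brute-force sanity check of the claimed count (a sketch using SymPy; with $w=10$ we have $n=4$, so at least $w-n=6$ of the $w$ consecutive integers should have a prime factor exceeding $w$):
from sympy import primefactors, primepi

w = 10
x = 2**3 * 3**2 * 5 * 7 + 1           # smallest allowed x: just above 2520
count = sum(1 for i in range(w)
            if max(primefactors(x + i)) > w)
print(count, w - primepi(w))          # 10 >= 6: the claimed bound holds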
REPLY [4 votes]: I think your second proof is correct. I'm going to rewrite it:
Theorem (Sylvester's theorem generalization):
Let $n,k\in\mathbb{N}$ with $n\geq$ lcm$(1,\ldots,k)$, and let $\pi(x):=\sum_{p\leq x} 1$ be the number of primes not greater than $x$. Then in the interval $[n,n+k]$ there are at least $k+1-\pi(k)$ integers $n_i$ with a prime factor $p_i>k$.
Proof: For $p$ prime let $\nu_p(k)$ be the largest integer $v$ such that $p^v\le k$, i.e. the exponent of $p$ in lcm$(1,\ldots,k)$. Let gpf$(x)$ be the greatest prime factor of $x$ and $p_j$ be the $j$-th prime. Consider $0\leq i\leq k$.
Suppose that $i$ is such that gpf$(n+i)\leq p_{\pi(k)}$ ($p_{\pi(k)}$ is the greatest prime not greater than $k$). Then there exists a prime $p_i\leq p_{\pi(k)}$ and an exponent $v_i\in\mathbb{N}$ such that $p_i^{v_i}|n+i$ and $p_i^{v_i}>k$, as otherwise
$$n+i\leq\displaystyle\prod_{p\leq k}p^{\nu_p(k)}=\text{lcm}(1,\ldots,k)\leq n<n+i,$$
a contradiction. Moreover, if $i\neq j$ are two such indices with $p_i=p_j=p$ and, say, $v_i\leq v_j$, then $p^{v_i}$ divides both $n+i$ and $n+j$, hence divides $|j-i|\leq k$, while $p^{v_i}>k$. Therefore $p_i\neq p_j$.
Thus, to every integer $i$ such that gpf$(n+i)\leq p_{\pi(k)}$ there corresponds a different prime $p_i\leq p_{\pi(k)}$, so that there can be at most $\pi(k)$ integers of this form. Hence there are at least $k+1-\pi(k)$ numbers $n+i\in [n,n+k]$ such that gpf$(n+i)\geq p_{\pi(k)+1}>k$.
Corollary (Grimm's conjecture): If $n\geq$lcm$(1,\ldots,k)$, then for every integer $n_i\in[n,n+k]$ there is a different prime $p_i$ such that $p_i|n_i$ (i.e., Grimm's conjecture is true for this choice of $n$ and $k$).
Proof: If gpf$(n+i)\leq p_{\pi(k)}$, pick $p_i$ (we already know $p_i\neq p_j$ if $i\neq j$). Otherwise gpf$(n+i)>k$ and this factor cannot divide any other $n+j$ with $i\neq j\leq k$.
In fact, the two results are equivalent:
Lemma: Grimm's implies Sylvester's.
Proof: If there is a different prime $p_i|n_i$ for every $n_i\in[n,n+k]$, then as there are $\pi(k)$ primes below $k$, there must be at least $k+1-\pi(k)$ numbers $n_i$ such that $p_i>k$.
Now that I have put it like this, I realize that this theorem (and its proof!) are a particular case of Theorem 1 of M. Langevin, Plus grand facteur premier d'entiers en progression arithmétique, Séminaire Delange-Pisot-Poitou. Théorie des nombres (1976-1977), 18(1), 1-7. So this was known (although perhaps not very well known!).
Observe that Langevin manages to prove the result with the less restrictive condition that $n+i$ does not divide lcm$(1,\ldots,k)$ for any $i\in\{0,\ldots,k\}$. We can adapt your proof to get this condition: if gpf$(n+i)\leq p_{\pi(k)}$ and $n+i\not|$lcm$(1,\ldots,k)$ then there must be a prime $p_i\leq p_{\pi(k)}$ and an exponent $v_i\in\mathbb{N}$ such that $p_i^{v_i}|n+i$ and $p_i^{v_i}>k$. The proof then follows as before.<|endoftext|>
TITLE: Transition from a Riemann sum to an Integral
QUESTION [5 upvotes]: The Riemann sum over an interval $[a,b]$ is usually defined as
$$\lim\limits_{N\to\infty}\sum\limits_{k=0}^Nf\left(a+k\cdot\frac{b-a}{N}\right)\frac{b-a}{N}$$
Thus if we encounter a sum of the form
$$\lim\limits_{N\to\infty}\sum\limits_{k=0}^Nf\left(k\cdot\frac{1}{N}\right)\frac{1}{N}$$
we can conclude that it is equal to an integral over the interval $[0,1]$.
$$\lim\limits_{N\to\infty}\sum\limits_{k=0}^Nf\left(k\cdot\frac{1}{N}\right)\frac{1}{N}=\int_0^1f(x)dx\tag{1}\label{1}$$
What can we conclude about the following sum
$$\lim\limits_{N\to\infty}\lim\limits_{M\to\infty}\sum\limits_{k=0}^M f\left(k\cdot\frac{1}{N}\right)\frac{1}{N}\tag{2}\label{2}$$
To clarify, this is an infinite sum \eqref{2}, that differs from the Riemann sum \eqref{1}, in the upper limit of the sum. In the Riemann sum \eqref{1}, there is a relation between $M$ and $N$, namely $N=M$, while there is no such relation specified in \eqref{2}. If we can equate it to an integral, how are we to determine the limits of integration?
Equation \eqref{2} is to be understood as follows: first $M\to\infty$, so that we have an infinite sum (suppose it is convergent). Then we form a sequence of infinite sums, where $N$ increases for each element of the sequence. That is
$$S_N=\lim\limits_{M\to\infty}\sum\limits_{k=0}^M f\left(k\cdot\frac{1}{N}\right)\frac{1}{N}$$
What does this sequence tend to?
Is it true that (or when is it true)
$$\lim\limits_{N\to\infty}S_N=\int_0^\infty f(x)dx$$
Also the general term in \eqref{2} is $C_k=f\left(k\cdot\frac{1}{N}\right)$. How does it behave in the limit, namely
$$\lim\limits_{N\to \infty}\lim\limits_{M\to \infty}f\left(M\cdot\frac{1}{N}\right)$$
REPLY [4 votes]: Suppose $f$ is Riemann integrable on $[0,b]$ for every $b > 0$ and the improper integral over $[0, \infty)$ is convergent.
We first consider the case where $f$ is nonnegative and non-increasing, as suggested by @Winther, where we have
$$\frac{f((k+1)/N)}{N} \leqslant \int_{k/N}^{(k+1)/N} f(x) \, dx \leqslant \frac{f(k/N)}{N}. $$
This implies
$$\int_0^{(M+1)/N} f(x) \, dx \leqslant \frac{1}{N} \sum_{k=0}^{M} f(k/N) \leqslant \frac{f(0)}{N} + \int_0^{M/N} f(x) \, dx. $$
The sequence of partial sums is increasing and bounded, hence convergent as $M \to \infty$, with
$$\int_0^{\infty} f(x) \, dx \leqslant\frac{1}{N} \sum_{k=0}^{\infty} f(k/N) \leqslant \frac{f(0)}{N} + \int_0^{\infty} f(x) \, dx. $$
Therefore,
$$\lim_{N \to \infty} \lim_{ M \to \infty}\frac{1}{N} \sum_{k=0}^{M} f(k/N) = \int_0^\infty f(x) \, dx.$$
Can this still hold if $f$ is not monotonic?
For example, consider $f(x) = \sin x /x$, where
$$\int_0^\infty \frac{\sin x}{x} \, dx = \frac{\pi}{2}.$$
Examining the corresponding series (WLOG starting with $k=1$) we find
$$\frac{1}{N}\sum_{k = 1}^{\infty} \frac{\sin (k/N)}{k/N} = \sum_{k = 1}^{\infty} \frac{\sin (k/N)}{k} \\ = \frac{\pi}{2}-\frac{1}{2N} \\ \longrightarrow_{N \to \infty} \frac{\pi}{2}.$$
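A quick numerical check of this example (a sketch; the conditionally convergent tail decays slowly, so the agreement is only to a few decimal places):
import math

N, M = 50, 10**6
s = sum(math.sin(k / N) / k for k in range(1, M))
print(s, math.pi / 2 - 1 / (2 * N))   # both approximately 1.5608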
I have not yet found a counterexample for a non-monotone function. As the integral test can be generalized to $C^1$ functions of bounded variation, I suspect this may characterize a wider class of functions for which this result holds.
This could be shown by considering
$$\left|\int_{k/N}^{(k+1)/N} f(x) \, dx - \frac{f(k/N)}{N} \right| \leqslant \int_{k/N}^{(k+1)/N} |f(x) - f(k/N)| \, dx $$
and then summing over $k$, applying the mean value theorem when $f$ is differentiable, and using $\int_0^\infty |f'(x)|\, dx < \infty$ to show that the sum converges to the integral as $N \to \infty.$<|endoftext|>
TITLE: On an expansion of $(1+a+a^2+\cdots+a^n)^2$
QUESTION [6 upvotes]: Question: What is an easy or efficient way to see or prove that
$$
1+2a+3a^2+\cdots+na^{n-1}+(n+1)a^n+na^{n+1}+\cdots+3a^{2n-2}+2a^{2n-1}+a^{2n}\tag{1}
$$
is equal to
$$
(1+a+a^2+\cdots+a^n)^2\tag{2}
$$
Maybe this is a particular case of a more general, well-known result?
Context: This is used with $a:=e^{it}$ to get an expression in terms of $\sin$ for the Fejér kernel.
Thoughts: I thought about calculating the coefficient $c_k$ of $a^k$. But my method is not so obvious that we can get from $(1)$ to $(2)$ in the blink of an eye.
$\mathbf{k=0}$ : clearly $c_0=1$.
$\mathbf{1\leq k\leq n}$ : $c_k$ is the number of integer solutions of $x_1+x_2=k$ with $0\leq x_1,x_2\leq k$, which in turn is the number of ways we can choose a bar $|$ in
$$
\underbrace{|\star|\star|\cdots|\star|}_{k\text{ stars}}
$$
So $c_k=k+1$.
$\mathbf{k=n+i\quad(1\leq i\leq n)}$ : $c_k$ is the number of integer solutions to $x_1+x_2=n+i$ with $0\leq x_1,x_2\leq n$, which in turn is the number of ways we can choose a bar $|$ in
$$
\underbrace{|\star|\star|\cdots|\star|}_{n+i\text{ stars}}
$$
different from the $i$-th one from each side. So $c_k=(n+i)+1-2i=n-i+1$.
REPLY [2 votes]: Hint:
Use synthetic division twice, after you've rewritten the expression as
$$\frac{(a^{n+1}-1)^2}{(a-1)^2}=\frac{a^{2n+2}-2a^{n+1}+1}{(a-1)^2}$$
$$\begin{array}{*{11}{r}}
&1&0&0&\dotsm&0&-2&0&0&\dots&0&0&1\\
&\downarrow&1&1&\dotsm&1&1&-1&-1&\dotsm&-1&-1&-1\\
\hline
\times1\quad&1&1&1&\dotsm&1&-1&-1&-1&\dotsm&-1&-1&0\\
&\downarrow&1&2&\dotsm&n&n+1&n&n-1&\dotsm&2&1\\
\hline
\times1\quad&1&2&3&\dotsm&n+1&n&n-1&n-2&\dotsm&1&0
\end{array}$$<|endoftext|>
TITLE: The integral of $\left|\frac{\cos x}x\right|$
QUESTION [7 upvotes]: I'm looking to determine whether the following function is unbounded or not:
$$
F(x) = \int_1^x\left|\frac{\cos t}{t}\right|\text{d} t
$$
I can't seem to do much with it because of the $|\cos(t)|$. I thought of using the fact that $\int |f| \ge |\int f|$, but the problem is that the integral of $\frac{\cos t}t$ (without the absolute values) is bounded, and so that doesn't prove that $F(x)$ is unbounded or bounded. I tried re-expressing this as a cosine integral (the function $\text{Ci}(x)$) but to no avail. I'm not sure where else to go with this; the main problem seems to be that it's very difficult to derive an inequality with the $|\cos(t)|$ without a $|\cos(t)|$ on the other side of the inequality (or at least some trig function).
Any help would be appreciated.
REPLY [5 votes]: Hint:
Consider the harmonic series and
$$\int_{\pi/2 + k\pi}^{3\pi/2 + k\pi} \frac{| \cos t|}{t} \, dt \geqslant \frac{1}{3\pi/2 + k \pi}\int_{\pi/2 + k\pi}^{3\pi/2 + k\pi} |\cos t| \, dt = \frac{2}{3\pi/2 + k \pi}$$<|endoftext|>
TITLE: How to integrate $\int_{0}^{1} \frac{1-x}{1+x} \frac{dx}{\sqrt{x^4 + ax^2 + 1}}$?
QUESTION [20 upvotes]: The question is how to show the identity
$$ \int_{0}^{1} \frac{1-x}{1+x} \cdot \frac{dx}{\sqrt{x^4 + ax^2 + 1}} = \frac{1}{\sqrt{a+2}} \log\left( 1 + \frac{\sqrt{a+2}}{2} \right), \tag{$a>-2$} $$
I checked this numerically for several cases, but even Mathematica 11 could not manage this symbolically for general $a$, except for some special cases like $a = 0, 1, 2$.
Addendum. Here are some backgrounds and my ideas:
This integral came from my personal attempt to find the pattern for the integral
$$ J(a, b) := \int_{0}^{1} \frac{1-x}{1+x} \cdot \frac{dx}{\sqrt{1 + ax^2 + bx^4}}. $$
This drew my attention as we have the following identity
$$ \int_{0}^{\infty} \frac{x}{x+1} \cdot \frac{dx}{\sqrt{4x^4 + 8x^3 + 12x^2 + 8x + 1}} = J(6,-3), $$
where the LHS is the integral from this question. So establishing the claim in this question amounts to showing that $J(6,-3) = \frac{1}{2}\log 3 - \frac{1}{3}\log 2$, though I am skeptical that $J(a, b)$ has a nice closed form for every pair of parameters $(a, b)$.
A possible idea is to write
\begin{align*}
&\int_{0}^{1} \frac{1-x}{1+x} \cdot \frac{dx}{\sqrt{x^4 + ax^2 + 1}} \\
&\hspace{5em}= \int_{0}^{1} \frac{(x^{-2} + 1) - 2x^{-1}}{x^{-1} - x} \cdot \frac{dx}{\sqrt{(x^{-1} - x)^2 + a + 2}}
\end{align*}
This follows from a simple algebraic manipulation. This suggests that we might be able to apply Glasser's master theorem, though in a less trivial way.
I do not believe that this is particularly hard, but I literally have not enough time to think about this now. So I guess it is a good time to seek help.
REPLY [4 votes]: $$\text{let } \ \frac{1-x}{1+x}=t \Rightarrow x=\frac{1-t}{1+t}\Rightarrow dx=-\frac{2}{(1+t)^2}dt\ \text{ then:}$$
$$\int_0^1 \frac{1-x}{1+x}\frac{dx}{\sqrt{x^4+ax^2+1}}=\int_0^1 \frac{2t}{\sqrt{(a+2)t^4-2(a-6)t^2+(a+2)}}dt$$
$$\overset{t^2=x}=\frac{1}{\sqrt{a+2}}\int_0^1 \frac{dx}{\sqrt{x^2-2\left(\frac{a-6}{a+2}\right)x+1}}=\frac{1}{\sqrt{a+2}}\left[\ln\left(x-\tfrac{a-6}{a+2}+\sqrt{x^2-2\tfrac{a-6}{a+2}\,x+1}\right)\right]_0^1=\boxed{\frac{1}{\sqrt{a+2}}\ln \left(1+\frac{\sqrt{a+2}}{2}\right),\quad a>-2}$$<|endoftext|>
TITLE: Linear isometry between Hilbert spaces has a closed range
QUESTION [5 upvotes]: Let $U$ be a linear isometry between Hilbert spaces. Why does the fact that the range of $U$ is dense imply that the range of $U$ is closed?
I am trying to understand the proof of theorem 5.4 in Conway's A Course in Functional Analysis.
REPLY [9 votes]: In fact, the range of a linear isometry $U \colon H \rightarrow H'$ between Hilbert spaces must always be closed. If the range of $U$ is also dense then $U(H) = H'$ so $U$ is one-to-one and onto.
The reason is that the range $U(H)$ of $U$ is complete, and a complete subspace of a normed space must be closed. To see that $U(H)$ is complete, let $(Ux_n)$ be a Cauchy sequence in $U(H)$, so $\| Ux_n - Ux_m \|_{H'} \rightarrow 0$. Since $U$ is an isometry, $\| x_n - x_m \|_{H} = \| Ux_n - Ux_m \|_{H'}$ and so $(x_n)$ is Cauchy in $H$. Since $H$ is complete, $x_n \rightarrow x$ for some $x \in H$, and then $Ux_n \rightarrow Ux$ by continuity of $U$, so the limit lies in $U(H)$.<|endoftext|>
TITLE: What method did this person use to rotate the points in 2D Space to imitate 3D Rotation?
QUESTION [5 upvotes]: I've been wondering about how people seem to rotate graphs on a 2D area, and came across this Desmos 2D graph, found here (desmos.com). Once I saw this, I looked at the equations and was blown away by the complexity of rotating them with different variables ($a$, $b$, and $c$). An example of one of the equations of the points, which are cartesian in the format of $(x,y)$:
$\left(\cos (u)\cos (v)-\sin (u)\cos (v)+\sin (v),\sin (u)\sin (w)-\cos (u)\sin (v)\cos (w)+\sin (u)\sin (v)\cos (w)+\cos (u)\sin (w)+\cos (v)\cos (w)\right)$
Quite the long equation to find a point, but understandable. I'm just interested in knowing the mathematical reasoning behind using these functions to find the locations of the points, not the lines (they just connect multiple points). Is this related to the rotation matrix in any way? Or is it using something else whose name I could learn so I could pursue future research? Thanks!
REPLY [5 votes]: Set all parameters $a, b, c$ to $0$ and then you get a square on the $xy$ plane with vertices at $(\pm 1, \pm 1)$. Imagine that this is what you see when you look at a cube in $\mathbb{R}^3$ whose vertices are $(\pm 1, \pm 1, \pm 1)$ "from above" (that is, from the $z$ axis to the $xy$ plane). From this perspective, the upper face of the cube (with vertices $(\pm 1, \pm 1, 1)$) completely hides away the lower face of the cube (with vertices $(\pm 1, \pm 1, -1)$) and so you can only see a square.
Now, if you change the $b$ parameter (where $b = \pi v$), the square gets rotated on the $xy$ plane. This corresponds to rotating the cube around the $z$-axis. Changing the $a$ parameter (where $a = \pi u$) corresponds to rotating the cube around the $y$ axis and changing the $c$ parameter (where $c = \pi w$) corresponds to rotating the cube around the $x$ axis. Each such rotation can be done by multiplication with an appropriate rotation matrix. In this case, a possible formula for transforming a point $(x,y,z)^T$ is given by
$$ \begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos w & \sin w \\ 0 & -\sin w & \cos w \end{pmatrix} \begin{pmatrix} \cos v & \sin v & 0 \\ -\sin v & \cos v & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos u & 0 & -\sin u \\ 0 & 1 & 0 \\ \sin u & 0 & \cos u \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} \\
= \begin{pmatrix} \cos u \cos v & \sin v & -\cos v \sin u \\
\sin u \sin w - \cos u \cos w \sin v & \cos v \cos w & \cos w \sin u \sin v + \cos u \sin w \\
\cos w \sin u + \cos u \sin v \sin w & -\cos v \sin w & \cos u \cos w - \sin u \sin v \sin w\end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}. $$
This corresponds to performing first a rotation in the $xz$ plane, then a rotation in the $xy$ plane and finally a rotation in the $yz$ plane (and this is indeed the formula the application uses).
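One can verify the displayed product numerically (a sketch using NumPy; the angles are arbitrary test values):
import numpy as np

u, v, w = 0.3, 1.1, -0.7                 # arbitrary test angles

Rxz = np.array([[np.cos(u), 0, -np.sin(u)],
                [0, 1, 0],
                [np.sin(u), 0, np.cos(u)]])
Rxy = np.array([[np.cos(v), np.sin(v), 0],
                [-np.sin(v), np.cos(v), 0],
                [0, 0, 1]])
Ryz = np.array([[1, 0, 0],
                [0, np.cos(w), np.sin(w)],
                [0, -np.sin(w), np.cos(w)]])

M = Ryz @ Rxy @ Rxz                      # composite rotation from the answer
p = M @ np.array([1, 1, 1])              # image of the vertex (1, 1, 1)
print(p[:2])                             # xy-projection: point 7's formula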
We look at the picture from above and so we are interested only in the $xy$-components of the result giving us
$$ \begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto \begin{pmatrix} x \cos u \cos v + y \sin v - z \cos v \sin u \\
x (\sin u \sin w - \cos u \cos w \sin v) + y \cos v \cos w + z (\cos w \sin u \sin v + \cos u \sin w )\end{pmatrix}. $$
For example, point number $7$ in the application correspond to the vertex $(1,1,1)$ of the cube. Hence, the formula for changing $(1,1,1)$ in terms of $u,v,w$ is given by
$$ \begin{pmatrix} \cos u \cos v + \sin v - \cos v \sin u \\
\sin u \sin w - \cos u \cos w \sin v + \cos v \cos w + \cos w \sin u \sin v + \cos u \sin w \end{pmatrix} $$
which is the formula you wrote in the question.
To summarize, the person who wrote the application considered a cube in 3D and projected it (orthogonally) to the $xy$ plane (2D). By modifying three parameters, you can rotate the vertices and edges of the cube in 3D and see the projected result in 2D.<|endoftext|>
TITLE: Is every open set the interior of a closed set?
QUESTION [30 upvotes]: I am wondering if this is generally true for any topology. I think there might be counterexamples, but I am having trouble generating them.
REPLY [17 votes]: Since the complement of an open set is closed (and vice versa), and since the complement of the interior is the closure of the complement, we can rephrase your question equivalently as:
Is every closed set the closure of some open set?
This immediately suggests a counterexample: any singleton (i.e. a set containing only one point) is closed in $\mathbb R^n$ (with the usual Euclidean topology), but has no non-empty open subsets that it could be the closure of.
Conversely, the complement of any singleton (i.e. $\mathbb R^n \setminus \{x\}$ for any $x \in \mathbb R^n$) provides a counterexample to your original claim, being an open set that cannot be the interior of any closed set.<|endoftext|>
TITLE: How can we show that $4\arctan\left({1\over \sqrt{\phi^3}}\right)-\arctan\left({1\over \sqrt{\phi^6-1}}\right)={\pi\over 2}$
QUESTION [6 upvotes]: $$4\arctan\left({1\over \sqrt{\phi^3}}\right)-\arctan\left({1\over \sqrt{\phi^6-1}}\right)={\pi\over 2}\tag1$$
$\phi$;Golden ratio
I understand that we can use
$$\arctan{1\over a}+\arctan{1\over b}=\arctan{a+b\over ab-1}$$
but that would take quite a long time, and simplifying algebraic expressions involving surds is also a difficult task.
How else can we show that $(1)={\pi\over 2}$?
REPLY [2 votes]: A detailed and self-contained proof (without reference to previous publications) was given as an image, which is not reproduced here.<|endoftext|>
TITLE: construct a square without a ruler
QUESTION [11 upvotes]: How can I construct a square using only a pencil and a compass, i.e. no ruler?
Given: a sheet of paper with $2$ points marked on it, a pencil and a compass.
Aim: plot $2$ vertices such that the $4$ of them form a square using only a compass.
P.S.: no cheap tricks involved.
REPLY [5 votes]: The key to solving this problem is knowing how to construct, with compass alone, a segment $\sqrt{2}$ times as long as a given one.<|endoftext|>
TITLE: Does the unique map on the zero space have determinant 1?
QUESTION [11 upvotes]: The trivial vector space over any field $K$, consisting of only the zero vector, admits exactly one endomorphism, let's call it $z$, sending $0$ to itself.
It is the identity map, so it should have determinant $1$.
On the face of it, the zero map should have determinant $0$. But this is usually argued via $\lambda z = z$ for all $\lambda \in K$, so $\det z = \det (\lambda z) = \lambda^n \det z$, i.e. $(\lambda^n - 1)\det z = 0$. Normally that's enough to conclude that $\det z = 0$, but of course $n = 0$ in this case, so $\lambda^n = 1$ for all $\lambda$, and we learn nothing.
Despite being the zero map, it's full rank and has trivial kernel.
There are no nonzero vectors, so it has no eigenvectors, so it has no eigenvalues, so their product is $1$.
On the other hand, the determinant is meant to be multilinear, and so should map the zero matrix to zero. But should we say that $z$ is represented by a zero matrix, given that its matrix representation is $0\times 0$ and doesn't have any entries at all?
I can't help but feel like this is all very silly, but clearly the answer can't be anything other than $1$. Is there anything wrong with giving this answer? Does it cause any problems with any other typical properties of the determinant? Does it simplify any definitions or theorems?
REPLY [3 votes]: I don't recall offhand any reference which discusses this issue but I'll add my two cents in support of taking $\det{f} = 1$ as the definition for the unique map $f \colon V \rightarrow V$ on a zero-dimensional space.
This definition is consistent with various standard theorems in linear algebra so that one doesn't need to exclude the zero dimensional case as an exception. In fact, I can't think of a single theorem which will become false taking $\det{f} = 1$ as the definition while most will break if you take $\det{f} = 0$ as the definition and don't exclude the zero dimensional case. For example,
An endomorphism on a finite dimensional vector space is invertible iff $\det(f) \neq 0$.
The characteristic polynomial of an operator $f \colon V \rightarrow V$ on a finite dimensional vector space is monic of degree $\dim V$ and a scalar $\lambda \in \mathbb{F}$ is a root of the characteristic polynomial iff $\lambda$ is an eigenvalue of $f$. The characteristic polynomial of $f$ is defined as $\chi_f(x) = \det(x \cdot \operatorname{id} - f)$ so using the "1" convention it becomes $\chi_f(x) = 1$ which is indeed monic of degree zero and doesn't have any roots. Using the $0$ convention gives $\chi_f(x) = 0$ which has degree $-\infty$ and all scalars as roots.
The minimal polynomial of $f$ divides the characteristic polynomial. Again, the minimal polynomial is the unique monic polynomial $m_f$ of minimal (non-zero) degree such that $m_f(f) = 0$. In the zero dimensional case, it becomes $1$ (indeed $m_f(f) = \operatorname{id}_V = 0$) and it divides the characteristic polynomial $1$. If the characteristic polynomial would be zero, this would be false.
The characteristic polynomial of the restriction of an operator $g$ to an $g$-invariant subspace divides the characteristic polynomial of $g$. Since $\{ 0 \}$ is a legitimate $g$-invariant subspace, it makes sense to take the characteristic polynomial of $g|_{\{0\}} = f$ to be $1$ and not $0$.
An orthogonal map on an inner product space has determinant $\pm 1$; the unique map on the zero space is orthogonal (it preserves the trivial inner product), and $\det f = 1$ is consistent with this.
The determinant of the identity map is $1$.
It might look silly but the idea of orienting a zero-dimensional vector space is important (for example, to state a general version of Stokes' theorem which includes the fundamental theorem of calculus as a special case). An orientation on a zero-dimensional real vector space is just a choice of $\pm 1$ which states whether the point is "positive or negative". The unique map on the zero-dimensional vector space does nothing (it is the identity map) so it should be orientation preserving and a map is orientation preserving iff $\det(f) > 0$.
Finally, let me say that in my mind, if one wants to define the determinant of the unique map $f \colon V \rightarrow V$ on the zero dimensional space, then the definition shouldn't depend on the choice of field $\mathbb{F}$ over which we are working (this is more of a meta-mathematical statement). It would be ridiculous to define, say, $\det(f) = 2$ if $\mathbb{F} = \mathbb{R}$ while $\det(f) = 3$ if $\mathbb{F} = \mathbb{Z}_5$. Thus one is left with two sensible choices: $\det(f) = 0$ or $\det(f) = 1$. Since $\det(f) = 0$ breaks so many theorems, it doesn't make sense to take it as the definition; better to just leave it undefined and that's it.
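As a sanity check, this is also, as far as I can tell, what mainstream software returns; a minimal sketch (assuming NumPy and SymPy are installed):

```python
import numpy as np
from sympy import Matrix

# Determinant of the unique 0x0 matrix: the Leibniz sum over the
# permutations of the empty set has exactly one term, the empty
# product, which is 1.
print(np.linalg.det(np.zeros((0, 0))))  # expected: 1.0
print(Matrix(0, 0, []).det())           # expected: 1
```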
Added: Almost any textbook which embraces the definition of the determinant via the exterior algebra will have as a result a definition for the zero dimensional case which will be $\det(f) = 1$. For such textbooks, it won't be a convention but a (trivial) result. For a specific example, see Algebra 1, Chapters 1-3 by Bourbaki. On page 525 they state that $\det([]) = 1$ but for them, this is not a convention or an ad hoc definition but a result proved from their definition of the determinant and the notion of a matrix.<|endoftext|>
TITLE: Ab*surd* Integrals
QUESTION [6 upvotes]: I am unable to find a proof for these integrals on the internet.
$$\displaystyle \int_0^{\frac{\pi}{2}} \cot^{-1}(\sqrt{1+\csc{\theta}}\,) \, \text{d}\theta = \frac{\pi^2}{12}$$
$$\displaystyle \int_0^\frac{\pi}{2} \csc^{-1}(\sqrt{1+\cot{\theta}}\,) \, \text{d}\theta = \frac{\pi^2}{8}$$
Sources: Brilliant, AoPS
I tried differentiating under the integral sign but I can't think of an appropriate parameter that leaves easily integrable rational functions.
I have tried exploiting the bounds to reflect and transform the integrand but to no avail.
A real solution is preferred but a complex solution is perfectly acceptable.
A geometric solution is not something I have considered but I'm just grasping at straws here.
REPLY [10 votes]: The second integral equals
$$ I_2=\int_{0}^{\pi/2}\arcsin\sqrt{\frac{\tan t}{1+\tan t}}\,dt=\int_{0}^{\pi/2}\arctan\sqrt{\tan t}\,dt=\int_{0}^{+\infty}\frac{\arctan\sqrt{u}}{1+u^2}\,du$$
and by splitting the last integration range as $(0,1)\cup(1,+\infty)$ and performing the substitution $u\mapsto\frac{1}{u}$ on the second part,
$$ I_2 = \int_{0}^{1}\frac{\arctan\sqrt{u}}{1+u^2}\,du+\int_{0}^{1}\frac{\frac{\pi}{2}-\arctan\sqrt{u}}{1+u^2}\,du = \frac{\pi}{2}\int_{0}^{1}\frac{du}{1+u^2}=\frac{\pi}{2}\cdot\frac{\pi}{4}=\color{red}{\frac{\pi^2}{8}}.$$
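As a quick numerical sanity check of $I_2$ (a sketch assuming SciPy is available), using the intermediate form $\int_{0}^{\pi/2}\arctan\sqrt{\tan t}\,dt$:

```python
import numpy as np
from scipy.integrate import quad

# I_2 in the form ∫_0^{π/2} arctan(sqrt(tan t)) dt
val, err = quad(lambda t: np.arctan(np.sqrt(np.tan(t))), 0, np.pi / 2)
print(val, np.pi**2 / 8)  # both ≈ 1.2337
```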
The first integral is
$$ I_1=\frac{\pi^2}{4}-\int_{0}^{\pi/2}\arctan\sqrt{\frac{1+\sin t}{\sin t}}\,dt=\frac{\pi^2}{4}-2\int_{0}^{\pi/4}\arctan\sqrt{\frac{1+\cos(2t)}{\cos (2t)}}\,dt$$
and
$$\int_{0}^{\pi/4}\arctan\sqrt{\frac{1+\cos(2t)}{\cos (2t)}}\,dt=\int_{0}^{1}\arctan\sqrt{\frac{2}{1-u^2}}\frac{du}{1+u^2}$$
is a variant of Ahmed's integral that can be tackled through differentiation under the integral sign: it is enough to be able to integrate $\frac{\sqrt{1-u^2}}{(1+a-u^2)(1+u^2)}$.<|endoftext|>
TITLE: Casimir Operator of $\mathfrak{sl}_n(\mathbb{C})$.
QUESTION [5 upvotes]: If I have the Lie algebra $\mathfrak{g} = \mathfrak{sl}_n(\mathbb{C})$ and the trace form $B(x,y) = \operatorname{tr}(xy)$ for $x, y \in \mathfrak{g}$, how does one calculate by which scalar the Casimir element $C \in Z(\mathscr{U}(\mathfrak{g}))$ acts on the highest weight module $V(\lambda)$?
I know that one defines the Casimir by $C = \sum_i x_i x_i^*$ where $\{x_i\}$ is a basis for the Lie algebra and $\{x_i^*\}$ the dual basis with respect to $B$; the choice is arbitrary (i.e. $C$ is independent of the choice of basis).
I have calculated some simple examples by hand ($n=3$ acts by the scalar $\frac{8}{3}$, for example, on $V(2)$). How should one proceed in general?
Thanks.
REPLY [4 votes]: The way to calculate the action of $C$ is to make a smart choice of dual bases and then observe that it suffices to check the action of $C$ on a highest weight vector. (The fact that you use the trace form rather than the Killing form shows up only in an overall scaling of $C$.) Basically, you have to observe that the Cartan subalgebra $\mathfrak h$ is orthogonal to all root spaces with respect to $B$ while for two roots $\alpha$ and $\beta$, the restriction of $B$ to $\mathfrak g_{\alpha}\times\mathfrak g_{\beta}$ is non-zero if and only if $\beta=-\alpha$. First choose a basis $\{H_i\}$ of the space $\mathfrak h$ of diagonal matrices which is orthonormal with respect to $B$. Next, for each positive root $\alpha$ choose $E_\alpha\in\mathfrak g_\alpha$ and $F_\alpha\in\mathfrak g_{-\alpha}$ with $B(E_\alpha,F_\alpha)=1$; then $\{H_i, E_\alpha, F_\alpha\}$ and $\{H_i, F_\alpha, E_\alpha\}$ are dual bases, so $$C=\sum_i H_i^2+\sum_{\alpha>0}\left(E_\alpha F_\alpha+F_\alpha E_\alpha\right).$$ Applying this to a highest weight vector $v\in V(\lambda)$ (which satisfies $E_\alpha v=0$ and $Hv=\lambda(H)v$) and rewriting $E_\alpha F_\alpha=F_\alpha E_\alpha+[E_\alpha,F_\alpha]$, one finds that $C$ acts by the scalar $$\langle\lambda,\lambda+2\rho\rangle,$$ where $2\rho$ is the sum of the positive roots and $\langle\cdot,\cdot\rangle$ is the bilinear form induced on weights by $B$. For instance, for $\mathfrak{sl}_3$ acting on the defining representation $\mathbb{C}^3$ this evaluates to $\frac{8}{3}$.
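A machine check of the $\mathfrak{sl}_3$ number (a sketch assuming NumPy; the particular basis below is my own illustrative choice): build a basis of traceless $3\times 3$ matrices, obtain the dual basis by inverting the Gram matrix of the trace form, and evaluate $C=\sum_i x_i x_i^*$ in the defining representation.

```python
import numpy as np

# Basis of sl(3): elementary matrices E_ij (i != j) plus two traceless
# diagonal matrices.
X = []
for i in range(3):
    for j in range(3):
        if i != j:
            E = np.zeros((3, 3)); E[i, j] = 1.0
            X.append(E)
for i in range(2):
    D = np.zeros((3, 3)); D[i, i], D[i + 1, i + 1] = 1.0, -1.0
    X.append(D)

G = np.array([[np.trace(a @ b) for b in X] for a in X])   # Gram matrix of B
Ginv = np.linalg.inv(G)
# Dual basis x_i^* = sum_j (G^{-1})_{ij} x_j, so that tr(x_i x_j^*) = delta_ij.
Xdual = [sum(Ginv[i, j] * X[j] for j in range(len(X))) for i in range(len(X))]

C = sum(x @ xd for x, xd in zip(X, Xdual))   # Casimir in the defining rep
print(np.round(C, 10))                       # (8/3) * identity
```

Since $\operatorname{tr}(x_i x_i^*)=1$ for each of the $8$ basis elements, $\operatorname{tr}(C)=8$, forcing the scalar $8/3$ on the irreducible $\mathbb{C}^3$; permuting the basis leaves the output unchanged.<|endoftext|>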
TITLE: Sum of all rationals between 0 and 1 squared
QUESTION [5 upvotes]: Yesterday I came up with a question: if the rational numbers are countable, that means that all rational numbers between 0 and 1 can be listed in a sequence. Let $Q(n)$ be that sequence. It is pretty clear that $\sum_{n=1}^{\infty}Q(n) >\sum_{n=1}^{\infty}\frac{1}{n}$, so it diverges. But what about $\sum_{n=1}^{\infty}Q(n)^2$? Does this series converge? Is there even a way to define $Q(n)$ in a precise way?
Many thanks in advance!!
REPLY [3 votes]: I will prove a little more general statement:
For all $\varepsilon>0$, the sum of all squared rational numbers between $0$ and $\varepsilon$ diverges.
Let's fix $\varepsilon >0$.
Let us denote by $(Q_{\varepsilon}(n))$ an enumeration of the rationals between $0$ and $\varepsilon$.
Since $[\varepsilon/2,\varepsilon]\cap \mathbb Q$ is infinite (because $\varepsilon>0$), we can extract an infinite subsequence $(Q'_{\varepsilon}(n))$ of $(Q_{\varepsilon}(n))$ by keeping only the terms with $Q_{\varepsilon}(n)\geqslant \frac{\varepsilon}2$.
We then have:
$$\sum_{n\in \mathbb N} Q_{\varepsilon}(n)^2\geqslant\sum_{n\in \mathbb N} Q'_{\varepsilon}(n)^2\geqslant \sum_{n\in \mathbb N} \frac{\varepsilon^2}4=+\infty.$$
So the original series diverges, and you can deduce your result from the case $\varepsilon =1$.
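As for defining $Q(n)$ precisely (the side question above): one explicit choice enumerates reduced fractions by increasing denominator, and a short sketch in plain Python shows the partial sums of squares growing past any bound:

```python
from math import gcd

def rationals_01():
    """Yield each rational in (0,1) exactly once: p/q in lowest terms,
    ordered by denominator q, then by numerator p."""
    q = 2
    while True:
        for p in range(1, q):
            if gcd(p, q) == 1:
                yield p / q
        q += 1

gen, s = rationals_01(), 0.0
for k in range(1, 500001):
    s += next(gen) ** 2
    if k % 100000 == 0:
        print(k, s)   # the partial sums keep climbing without bound
```<|endoftext|>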
TITLE: Evaluating $\int \frac{1-7\cos^2x}{\sin^7x\cos^2x}dx$
QUESTION [6 upvotes]: How do I evaluate $$\int \frac{1-7\cos^2x}{\sin^7x\cos^2x}\,dx\,?$$ I tried using integration by parts, and here is my approach.
For the first term I wrote $\int \frac{\sin x}{(1-\cos^2x)^4\cos^2x}\, dx$, put $\cos x=t$, and then tried to use partial fractions. I applied similar logic for the other part. But that made it lengthy to solve, as decomposition into partial fractions is very time-consuming. This question came in an objective examination in which time was limited. Can anyone help me with a shorter way to solve this problem? Thanks.
REPLY [8 votes]: Well, we know that:
$$\frac{1-7\cos^2\left(x\right)}{\sin^7\left(x\right)\cos^2\left(x\right)}=\csc^7\left(x\right)\left(\sec^2\left(x\right)-7\right)\tag1$$
So, for the integral we get:
$$\int\frac{1-7\cos^2\left(x\right)}{\sin^7\left(x\right)\cos^2\left(x\right)}\space\text{d}x=\int\csc^7\left(x\right)\sec^2\left(x\right)\space\text{d}x-7\int\csc^7\left(x\right)\space\text{d}x\tag2$$
Now, for the right integral you can use the reduction formula.
$\color{red}{\text{But}}$ using integration by parts:
$$\int\csc^7\left(x\right)\sec^2\left(x\right)\space\text{d}x=\csc^6\left(x\right)\sec\left(x\right)+7\int\csc^7\left(x\right)\space\text{d}x\tag3$$
So, we get that:
$$\int\frac{1-7\cos^2\left(x\right)}{\sin^7\left(x\right)\cos^2\left(x\right)}\space\text{d}x=\csc^6\left(x\right)\sec\left(x\right)+\color{red}{7\int\csc^7\left(x\right)\space\text{d}x-7\int\csc^7\left(x\right)\space\text{d}x}\tag4$$
Which gives that:
$$\int\frac{1-7\cos^2\left(x\right)}{\sin^7\left(x\right)\cos^2\left(x\right)}\space\text{d}x=\csc^6\left(x\right)\sec\left(x\right)+\text{C}\tag{5}$$
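A one-line SymPy confirmation of $(5)$ by differentiation (a sketch assuming SymPy; `simplify` should reduce the difference to $0$):

```python
import sympy as sp

x = sp.symbols('x')
antiderivative = sp.csc(x)**6 * sp.sec(x)
integrand = (1 - 7 * sp.cos(x)**2) / (sp.sin(x)**7 * sp.cos(x)**2)
print(sp.simplify(sp.diff(antiderivative, x) - integrand))  # expected: 0
```<|endoftext|>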
TITLE: Sum of random decreasing numbers between 0 and 1: does it converge??
QUESTION [153 upvotes]: Let's define a sequence of numbers between 0 and 1. The first term, $r_1$, will be chosen uniformly randomly from $(0, 1)$, but now we iterate this process choosing $r_2$ from $(0, r_1)$, and so on, so $r_3\in(0, r_2)$, $r_4\in(0, r_3)$... The set of all possible sequences generated this way contains the sequence of the reciprocals of all natural numbers, whose sum diverges; but it also contains all geometric sequences in which all terms are less than 1, and they all have convergent sums. The question is: does $\sum_{n=1}^{\infty} r_n$ converge in general? (I think this is called almost sure convergence?) If so, what is the distribution of the limits of all convergent series from this family?
REPLY [51 votes]: The probability $f(x)$ that the result is $\in(x,x+dx)$ is given by $$f(x) = \exp(-\gamma)\rho(x)$$ where $\rho$ is the Dickman function, as @Hurkyl pointed out below. This follows from the delay differential equation for $f$, $$f^\prime(x) = -\frac{f(x-1)}{x}$$ with the conditions $$f(x) = f(1) \;\rm{for}\; 0\le x \le1 \;\rm{and}$$ $$\int\limits_0^\infty f(x)\,dx = 1.$$ The derivation follows.
From the other answers, it looks like the probability is flat for the results less than 1. Let us prove this first.
Define $P(x,y)$ to be the probability that the final result lies in $(x,x+dx)$ if the first random number is chosen from the range $[0,y]$. What we want to find is $f(x) = P(x,1)$.
Note that if the random range is changed to $[0,ay]$ the probability distribution gets stretched horizontally by $a$ (which means it has to compress vertically by $a$ as well). Hence $$P(x,y) = aP(ax,ay).$$
We will use this to find $f(x)$ for $x<1$.
Note that if the first number chosen is greater than $x$ we can never get a sum less than or equal to $x$. Hence $f(x)$ is equal to the probability that the first number chosen is less than or equal to $x$ multiplied by the density for the random range $[0,x]$. That is, using the compression property with $a = 1/x$, $$f(x) = P(x,1) = p(r_1\le x)\,P(x,x) = x\cdot\frac{1}{x}\,f(1) = f(1),$$ so $f$ is constant on $[0,1]$. Next, we find $f(x)$ for $x>1$ in terms of $f(1)$.
First, note that when $x>1$ we have $$f(x) = P(x,1) = \int\limits_0^1 P(x-z,z) dz$$
We apply the compression again to obtain $$f(x) = \int\limits_0^1 \frac{1}{z} f(\frac{x}{z}-1) dz$$
Setting $\frac{x}{z}-1=t$, we get $$f(x) = \int\limits_{x-1}^\infty \frac{f(t)}{t+1} dt$$
This gives us the differential equation $$\frac{df(x)}{dx} = -\frac{f(x-1)}{x}$$
Since we know that $f(x)$ is a constant for $x<1$, this is enough to solve the differential equation numerically for $x>1$, modulo the constant (which can be retrieved by integration in the end). Unfortunately, the solution is essentially piecewise from $n$ to $n+1$ and it is impossible to find a single function that works everywhere.
For example when $x\in[1,2]$, $$f(x) = f(1) \left[1-\log(x)\right]$$
But the expression gets really ugly even for $x \in[2,3]$, requiring the logarithmic integral function $\rm{Li}$.
Finally, as a sanity check, let us compare the random simulation results with $f(x)$ found using numerical integration. The probabilities have been normalised so that $f(0) = 1$.
The match is near perfect. In particular, note how the analytical formula matches the numerical one exactly in the range $[1,2]$.
Though we don't have a general analytic expression for $f(x)$, the differential equation can be used to show that the expectation value of $x$ is 1.
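Both the flatness of $f$ on $[0,1]$ and $\mathbb{E}[x]=1$ are easy to probe by simulation; a minimal Monte Carlo sketch (assuming NumPy; the truncation threshold is an arbitrary small cutoff):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sum():
    total, r = 0.0, 1.0
    while r > 1e-12:            # the neglected tail is at most ~1e-12
        r = rng.uniform(0.0, r)
        total += r
    return total

sums = np.array([sample_sum() for _ in range(200_000)])
print(sums.mean())              # ≈ 1.0

counts, edges = np.histogram(sums, bins=np.linspace(0, 1, 11))
print(counts / (len(sums) * np.diff(edges)))  # ≈ exp(-gamma) ≈ 0.5615 per bin
```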
Finally, note that the delay differential equation above is the same as that of the Dickman function $\rho(x)$ and hence $f(x) = c \rho(x)$. Its properties have been studied. For example the Laplace transform of the Dickman function is given by $$\mathcal L \rho(s) = \exp\left[\gamma-\rm{Ein}(s)\right].$$
This gives $$\int_0^\infty \rho(x) dx = \exp(\gamma).$$ Since we want $\int_0^\infty f(x) dx = 1,$ we obtain $$f(1) = \exp(-\gamma) \rho(1) = \exp(-\gamma) \approx 0.56145\ldots$$ That is, $$f(x) = \exp(-\gamma) \rho(x).$$
This completes the description of $f$.<|endoftext|>
TITLE: Bishop ML and pattern recognition calculus of variations linear regression loss function
QUESTION [5 upvotes]: On page $46$, there is
($1.87$) $E[L]=\int \int \{y(x)-t\}^2p(x,t)dxdt$
Calculus of variations is used to give
($1.88$) $\dfrac{\partial E[L]}{\partial y(x)} = 2\int \{y(x)-t\}p(x,t)\,dt = 0$
The reader is referred to appendix $D$ on calculus of variations, but I am still confused. How does one get from ($1.87$) to ($1.88$), step by step?
REPLY [11 votes]: Rename $\hat x$ as $x$, then interchange the order of integration, so that we integrate with respect to $x$ last. Then Equation (1.87) is
$$
\int\int[y(x)-t]^2p(x,t)\,dt\,dx
$$which is of the form
$$
\int G(y(x),y'(x),x)\,dx\tag{D.5}
$$
where
$$
G(y,y',x)=\int[y-t]^2p(x,t)\,dt.\tag{*}$$ By the Euler-Lagrange equations we require
$$
\frac{\partial G}{\partial y} -\frac d{dx}\left(\frac{\partial G}{\partial y'}\right)=0.\tag{D.8}
$$
In this case the function $G$ doesn't depend on $y'$ so the LHS of the Euler-Lagrange equations simplifies to
$$\frac{\partial G}{\partial y}=\int 2[y-t]p(x,t)\,dt,$$
obtained by differentiating (*) under the integral sign.
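Solving (1.88) for $y(x)$ gives $y(x)\,p(x)=\int t\,p(x,t)\,dt$, i.e. $y(x)=\mathbb{E}[t\mid x]$: the optimal regression function is the conditional mean. A toy numeric check (a sketch assuming NumPy; the data-generating model is my own choice):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 100_000)
t = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)   # t = f(x) + noise

# Inside a narrow slice around x0, E[(y - t)^2] is minimized by the
# sample mean of t there, i.e. an estimate of E[t | x0].
x0, h = 0.4, 0.01
ts = t[np.abs(x - x0) < h]
grid = np.linspace(-2, 2, 801)
best = grid[np.argmin([np.mean((y - ts) ** 2) for y in grid])]
print(best, ts.mean(), np.sin(2 * np.pi * x0))   # all three ≈ 0.588
```<|endoftext|>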
TITLE: Does "either" make an exclusive or?
QUESTION [16 upvotes]: This is a very "soft" question, but regarding language in logic and proofs, should
"Either A or B"
Be interpreted as "A or B, but not both"?
I have always avoided saying "either" when my intent is a standard, inclusive or, because saying "either" to me makes it feel like an exclusive or.
REPLY [8 votes]: In everyday speech, "or" is usually exclusive even without "either." In mathematics or logic though "or" is inclusive unless explicitly specified otherwise, even with "either."
This is not a fundamental law of the universe, it is simply a virtually universal convention in these subjects. The reason is that inclusive "or" is vastly more common.<|endoftext|>
TITLE: A specific example regarding the inscribed square problem
QUESTION [5 upvotes]: Toeplitz' conjecture (also called inscribed square problem) says that:
For every Jordan curve $\mathscr C$, there exist four distinct points $A$, $B$, $C$ and $D$ belonging to $\mathscr C$ such that $ABCD$ is a square.
A Jordan curve is a non self-intersecting continuous loop.
Here is a drawing to illustrate the situation, and a link to the Wikipedia page if you want to find out more about this conjecture.
The conjecture has already been proven in several cases, including when $\mathscr C$ is piecewise analytic.
So we know that for these two figures, there exists an inscribed square.
The question is how do I find those squares?
REPLY [5 votes]: There is a way. If we draw a line orthogonal to the one that is aligned with the curve, then we can build the square.<|endoftext|>
TITLE: Partition edges of complete graph into paths of distinct length
QUESTION [6 upvotes]: Let $K_n$ be the complete undirected graph on $n$ vertices. Can you partition the edges of $K_n$ into $n-1$ paths of lengths $1,2,\ldots,n-1$ such that the edge-sets of the paths are pairwise disjoint?
I believe the statement to be true, but I cannot prove it. It is also possible that this is an open problem.
REPLY [5 votes]: For odd $n \geq 5$, we can decompose $K_n$ into $(n-1)/2$ edge-disjoint $n$-cycles. These can be broken into paths of lengths $1,2,\ldots,n-1$ (break the first one into path lengths $1$ and $n-1$, the second one into $2$ and $n-2$, and so on). (This is called the Walecki decomposition.)
By deleting a vertex from the Walecki decomposition for $K_{n+1}$, we find: for even $n \geq 4$, we can decompose $K_n$ into $n/2$ edge-disjoint paths with $n-1$ edges each. These can be broken into paths of lengths $1,2,\ldots,n-1$ (leave one alone, break another into path lengths $1$ and $n-2$, the next into $2$ and $n-3$, and so on).
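The odd case is easy to carry out by machine; a sketch in plain Python (the zigzag indexing follows the usual presentation of Walecki's construction): it builds the rotated Hamiltonian cycles of $K_n$, cuts the $k$-th one into paths with $k+1$ and $n-k-1$ edges, and verifies that the paths partition the edge set.

```python
from itertools import combinations

def walecki_path_partition(n):
    """Partition E(K_n), n odd, into paths with 1, 2, ..., n-1 edges."""
    m = (n - 1) // 2
    INF = 2 * m                         # the vertex playing 'infinity'
    seq, lo, hi, turn = [0], 1, 2 * m - 1, True
    while lo <= hi:                     # zigzag 0, 1, 2m-1, 2, 2m-2, ...
        seq.append(lo if turn else hi)
        lo, hi = (lo + 1, hi) if turn else (lo, hi - 1)
        turn = not turn
    paths = []
    for k in range(m):                  # k-th rotated Hamiltonian cycle
        cyc = [INF] + [(v + k) % (2 * m) for v in seq] + [INF]
        paths.append(cyc[:k + 2])       # k+1 edges
        paths.append(cyc[k + 1:])       # n-(k+1) edges
    return paths

n = 9
paths = walecki_path_partition(n)
edges = [tuple(sorted(e)) for p in paths for e in zip(p, p[1:])]
assert sorted(len(p) - 1 for p in paths) == list(range(1, n))
assert len(edges) == len(set(edges))                  # pairwise edge-disjoint
assert set(edges) == set(combinations(range(n), 2))   # they cover K_n
print("K_%d partitioned into paths of lengths 1..%d" % (n, n - 1))
```<|endoftext|>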
TITLE: Is $\langle\ a,b\ \vert\ aba=bab,\ abab=baba\ \rangle$ a presentation of the free group on a single generator?
QUESTION [5 upvotes]: Is the following a presentation of the free group generated by a single element?
$\langle\ a,b\ \vert\ aba=bab,\ abab=baba\ \rangle.$
My thinking is the following:
$abab = baba=b(bab)=b^2ab$ by substituting the first relation into the second. Simplifying, we get $a=b$. Since these steps give equivalent statements, the above presentation is in fact $\langle\ a,b\ \vert\ a=b\ \rangle$, i.e., the free group on one generator.
Is this correct?
REPLY [6 votes]: Yes, this is correct.
One thing I would leave out is the $b^2ab$ step, for $a=b$ follows from $abab=b(bab)$ by cancelling $bab$ on the right.<|endoftext|>
TITLE: Distribution on $(0, \infty)$ which cannot be extended to $\mathbb{R}$
QUESTION [8 upvotes]: I am working on exercises from Friedlander's Introduction to the Theory of Distributions and I am stuck in a particular problem.
The question is: "Show that $\langle u, \phi\rangle = \sum_\limits{k \geq 1} \partial^k \phi(1/k)$ is a distribution on $(0, \infty)$, but that there is no $v\in \mathcal{D}'(\mathbb{R})$ whose restriction to $(0, \infty)$ is equal to $u$."
I believe I have managed to prove the first part: Given any compact $K \subset (0,\infty)$, take a test function $\phi\in C^{\infty}_c(0, \infty)$ with $\operatorname{supp} \phi \subset K$. Take then $N$ such that $\frac{1}{N+1} < \min \operatorname{supp} \phi$. We have that $\langle u, \phi\rangle = \sum_\limits{k = 1}^N \partial^k \phi(1/k)$, since $\phi(1/k) = 0$ for all $k \geq N+1$. And so it is clear that there exist $C$ and $N$ such that $u$ satisfies the seminorm estimate $|\langle u, \phi\rangle| \leq C\sum\limits_{k=1}^N \sup|\partial^k \phi|$ for any such $\phi$.
Now, the second part is troubling me. I believe the way is to suppose there is a distribution $v\in \mathcal{D}'(\mathbb{R})$ with $v|_{(0,\infty)} = u$, and show that it would not satisfy the seminorm estimate because of the restriction. However, I am struggling to see how that should be done.
Note: I have recognized this distribution to be equivalent to $\sum\limits_{k\geq 1} \delta^{(k)}(x-1/k)$ but I am not sure how this helps!
REPLY [3 votes]: To close the question, Willie Wong's suggestion was to choose a test function $\phi$ which is equal to $\exp(x)$ on $\{x:|x|<1\}$. Then, formally, $\langle u, \phi\rangle = \sum\limits_{k\geq 1} \exp(1/k) \geq \sum\limits_{k\geq 1} 1$, which diverges. So $|\langle u, \phi\rangle|$ cannot be bounded by any seminorm estimate for our chosen $\phi$, and no $v\in \mathcal{D}'(\mathbb{R})$ can restrict to $u$.<|endoftext|>
TITLE: Are groups ordered pairs or sets?
QUESTION [6 upvotes]: Some books say stuff like "if $\forall x \in G$, $x^2=e$, then $G$ is abelian". But the notation for a group is $\langle G, \circ \rangle$, and that looks like an ordered pair. So, by the definition of an ordered pair, should not the elements of the group (viewed as a pair) be $\{G\}$ and $\{\circ,G\}$? Or am I getting the notation wrong?
I have the same doubt about the notation for partially ordered sets.
REPLY [14 votes]: Yes, a group is an ordered pair: the first element of the pair is a set (the underlying set of the group), and the second is a binary function on that set (which, in set theory, is actually a set too).
Saying something like "$G$ is abelian" is an abuse of notation: technically it's incorrect, but it has only one reasonable interpretation (this is only true btw if we aren't considering two different group structures on the same set, which we sometimes do). It's used because it's slightly easier to write than "$(G, \circ)$ is abelian."
Incidentally, given that "$e$" isn't actually part of the tuple $(G, \circ)$, that's also an abuse of notation - one should write $e_G$ (to distinguish it from the identity of some other group) or similar. But, again, we can get away with it in contexts where it won't lead to confusion. Also, it's worth pointing out that many texts treat groups as ordered triples of the form $(G, \circ, e)$.
REPLY [9 votes]: Often, in maths you encounter structures that are sets plus some structure on that set. For example with groups, you are dealing with a set $G$ and some operation $\cdot: G \times G \to G$ on this set. In topology, you have a set $X$ and a topology $\tau$ on this set. In measure theory you have a set $X$, a $\sigma$-algebra on $X$, denoted $\Gamma$ and a measure on $\Gamma$ denoted $\mu$.
In all of these cases you can describe the thing properly by providing the set and the structure together: a group is $(G,\cdot)$, a topological space is $(X, \tau)$ and a measure space is $(X, \Gamma, \mu)$. I think this is what your notation is about.
REPLY [5 votes]: This is technically an abuse of language. Conflating a tuple $(A,\ldots)$ defining a set with structure with the set $A$ itself is extremely common, and I don't remember hearing anyone ever complain about it. This is because the ordered tuple construction is quite artificial and is not the only way to associate a set with the structures we put on it.
For the example of groups, we could instead define a group as an object in the category $\mathbf{Grp}$ of all groups. Now this really is a set. The algebraic structure is then hidden in the morphisms of the category.<|endoftext|>
TITLE: Convex sets as intersection of half spaces
QUESTION [11 upvotes]: I want to prove that any closed convex sets can be written as an intersection of half spaces using only the separation theorem as a pre-requisite. I'm getting a feel that I need to show two sets are subsets of each other, but not being able to understand how exactly to go about it.
REPLY [12 votes]: I think that your approach should work.
Let $C\subseteq \mathbb{R}^n$ be a closed, convex set. Let $\mathcal{H}$ be the collection of closed, half-spaces that contain $C$. You would like to show that
$$C = \bigcap_{H\in \mathcal{H}}H.$$
First we can show $C \subseteq \bigcap_{H\in \mathcal{H}}H.$ Let $x\in C$. By the definition of $\mathcal{H}$, any $H\in \mathcal{H}$ satisfies $C\subseteq H$. Hence $x\in H$ for any $H\in \mathcal{H}$ and therefore $x\in \bigcap_{H\in \mathcal{H}}H.$ This gives us the desired inclusion.
It is left to show that $C \supseteq \bigcap_{H\in \mathcal{H}}H.$ We prove this using the contrapositive, that is we will show that if $x\not\in C$ then $x\not\in \bigcap_{H\in \mathcal{H}}H.$ So choose $x$ such that $x\not\in C$. Since $C$ is closed and convex, there is a hyperplane that strictly separates $x$ from $C$. This hyperplane defines a half space $H$ containing $C$. Hence $x\not\in H$ implying that $x\not\in \bigcap_{H\in \mathcal{H}}H$. This proves the desired inclusion.<|endoftext|>
TITLE: Continuity in a compact metric space.
QUESTION [7 upvotes]: Let $(X,d)$ be a compact metric space and let $f, g: X \rightarrow \mathbb{R}$ be continuous such that $$f(x) \neq g(x), \forall x\in X.$$
Show that there exists an $\epsilon$ such that $$|f(x) - g(x)| \geq \epsilon, \forall x \in X.$$
I'm assuming he means $\epsilon > 0$. Well, suppose to the contrary that for all $\epsilon > 0$, there exists an $x' \in X$ such that $|f(x') - g(x')| < \epsilon.$ Since $f(x')$ and $g(x')$ are fixed values, we must have $f(x') = g(x')$, a contradiction.
Seems uh... too easy? I didn't even have to use continuity or compactness? So seems wrong? (I'm really sick, so terrible at math this week, but is this right?)
REPLY [6 votes]: The problem with your proof is that you cannot fix $x'$ and vary $\epsilon$. This is because $x'$ is conditioned on your given $\epsilon$.
As for a correct solution note that $|f(x) - g(x)|$ is a continuous function from $X$ to $\mathbb{R}$. What do you know about the minimum of a continuous function from a compact space to $\mathbb{R}$?
REPLY [3 votes]: Negating the claim, we have: for every $k > 0$ there exists $x_k$ such that $|f(x_k) - g(x_k)| < \frac{1}{k}.$
Consider the sequence $x = (x_k)$. Since $X$ is compact, $x$ must have a subsequence converging to some $y \in X$. Passing to this subsequence and using the continuity of $f$ and $g$, we get $g(y) = f(y)$, which is a contradiction.<|endoftext|>
TITLE: When does $2^n-1$ divide $3^n-1$?
QUESTION [7 upvotes]: Is it possible for some integer $n>1$ that $2^n-1\mid 3^n-1$ ?
I have tried many things, but nothing worked.
REPLY [6 votes]: I was looking for this as well, and eventually figured it out myself. So here's my solution for future reference. The short answer is, $2^n - 1$ never divides $3^n - 1$. Here's the proof, making use of the Jacobi symbol.
Assume $2^n - 1 \mid 3^n - 1$. If $n = 2k$ is even, then $2^n - 1 = 4^k - 1 \equiv 0 \bmod 3$. Consequently, $3$ must also divide $3^n - 1$, which is a contradiction. At the very least, we can already assume $n = 2k + 1$ is odd. Next, since $3^n \equiv 1 \bmod 2^n - 1$, from the properties of the Jacobi-symbol it follows that
\begin{equation}
1 = (\frac{1}{2^n - 1}) = (\frac{3^n}{2^n - 1}) = (\frac{3^{2k}}{2^n - 1}) \cdot (\frac{3}{2^n - 1}) = (\frac{3}{2^n - 1})
\end{equation}
However, using Jacobi's law of reciprocity we also know
\begin{equation}
(\frac{2^n - 1}{3}) = (\frac{3}{2^n - 1}) \cdot (\frac{2^n - 1}{3}) = (-1)^{\frac{3 - 1}{2}\frac{2^n - 2}{2}} = (-1)^{2^{n - 1} - 1} = -1
\end{equation}
The only quadratic non-residue $\bmod 3$ is $2$, therefore $2^n - 1 \equiv 2 \bmod 3$, or alternatively $2^n \equiv 0 \bmod 3$. Since this implies $3$ divides $2^n$, we again arrive at a contradiction.
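For reassurance, the claim is also cheap to confirm by brute force for small exponents (plain Python, exact integer arithmetic):

```python
# 2^n - 1 never divides 3^n - 1 for n > 1; check the first 2000 cases.
for n in range(2, 2001):
    assert (3**n - 1) % (2**n - 1) != 0
print("no counterexample for 1 < n <= 2000")
```<|endoftext|>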
TITLE: Double Quotienting of a Ring is Isomorphic to Ring Quotient by Sum of Ideals
QUESTION [6 upvotes]: First let me say, please edit my title if their is a more appropriate one and if this is a duplicate please direct me and close the question. I tried searching for my question, but I don't know if the theorem I am trying to prove has a name - so it was difficult to know what to search.
Also let me say that I have already tried to squeeze this theorem out of one of the isomorphism theorems, but I can't quite see how to get it, so if your hint or answer is "check the isomorphism theorems for rings" I might need more than that. I would like to understand this for personal benefit, but I am kind of limited on time.
I am specifically working with $A$ as a commutative ring with unity, $\mathfrak{a}$ is an ideal of $A$, and I believe $\mathfrak{b}$ is to be taken as an ideal in $A/\mathfrak{a}$, and $\mathfrak{b}'$ is the ideal in $A$ that corresponds to $\mathfrak{b}$. Then I would like to show that
$$A/\mathfrak{a}/\mathfrak{b} \approx A/(\mathfrak{a} + \mathfrak{b}').$$
I tried to work through the mechanics of a specific example in full formality for insight; specifically I investigated $$\mathbb{Z}/<12>/<3+<12>>$$ where <12> is my ideal generated by 12 in $\mathbb{Z}$, and <3+ <12>> is my ideal generated by the coset with representative 3 in $\mathbb{Z}/<12>$. Sorry for the horrible notation here. I am aware this is probably an unusually formal take on the "coset" approach to quotienting, but I want to make sure I understand the nuts and bolts before passing off to theorems and the more "homomorphic image" approach to quotienting. Anyways:
In this example, $\mathbb{Z}/<12>/<3+<12>>$ has exactly three elements, the cosets represented by $0+<12>$, $1+<12>$ and $2+<12>$ (note that $3+<12>$ lies in the ideal $<3+<12>>$, so its coset is the zero coset).
This is easily mapped isomorphically onto $\mathbb{Z}/(<12>+<3>)$, since that sum of ideals is just <12>+<3> = <3>.
If you have any advice or direction please let me know.
REPLY [4 votes]: I have already tried to squeeze this theorem out of one of the isomorphism theorems, but I cant quite see how to get it...
Well, it's almost exactly the third isomorphism theorem, so it's a little strange you didn't succeed!
Perhaps the difficulty lay in understanding the relationship of $\mathfrak b$ to $\mathfrak b'$. The corresponding ideal is just an ideal $\mathfrak b'$ of $A$ which contains $\mathfrak a$, such that $\mathfrak b=\frac{\mathfrak b'}{\mathfrak a}$. With that said... is there any reason to write $\mathfrak b$ anymore? Perhaps not.
The third isomorphism theorem says that $\frac{A}{\mathfrak a}/\frac{\mathfrak b'}{\mathfrak a}\cong\frac{A}{\mathfrak b'}$. You shouldn't have to write $\mathfrak a +\mathfrak b'$ because $\mathfrak a+\mathfrak b'=\mathfrak b'$.<|endoftext|>
TITLE: Determine whether or not $\exp\left(\sum_{n=1}^{\infty}\frac{B(n)}{n(n+1)}\right)$ is a rational number
QUESTION [5 upvotes]: Let $B(n)$ be the number of ones in the base 2 expression for the positive integer n.
Determine whether or not $$\exp\left(\sum_{n=1}^{\infty}\frac{B(n)}{n(n+1)}\right)$$ is a rational number.
Attempt:
I tried to make the sum into something that resembles the power series of log, that way it would be easier to determine whether this number is rational. But I have no idea how to deal with $B(n)$.
Thanks in advance!
REPLY [3 votes]: For $B(n)$ we have the following properties:
\begin{align}
& B(2k) = B(k) & \text{if }n = 2k \\
& B(2k + 1) = B(k) + 1 & \text{if }n = 2k+1
\end{align}
Hence,
\begin{align*}
S = \sum\limits_{n=1}^{+\infty} \dfrac{B(n)}{n(n+1)} &= \sum\limits_{k=0}^{+\infty} \dfrac{B(2k+1)}{(2k+1)(2k+2)} + \sum\limits_{k=0}^{+\infty} \dfrac{B(2k + 2)}{(2k + 2)(2k+3)} \\
&= \sum\limits_{k=0}^{+\infty} \dfrac{B(k) + 1}{(2k+1)(2k+2)} + \sum\limits_{k=0}^{+\infty} \dfrac{B(k + 1)}{(2k + 2)(2k+3)} \\
&= \sum\limits_{k=0}^{+\infty} \dfrac{1}{(2k+1)(2k+2)} + \sum\limits_{k=0}^{+\infty} B(k + 1)\left(\dfrac{1}{(2k + 2)(2k+3)} + \dfrac{1}{(2k + 3)(2k+4)}\right) \\
&= \ln{2} + \sum\limits_{k=1}^{+\infty} B(k)\,\dfrac{4k + 2}{2k(2k+1)(2k+2)} = \ln{2} + \dfrac{1}{2}\sum\limits_{k=1}^{+\infty} \dfrac{B(k)}{k(k+1)} = \ln{2} + \dfrac{S}{2},
\end{align*}
so $S = 2\ln{2} = \ln{4}$.
Thus, $\exp\left\{\sum\limits_{n=1}^{+\infty} \dfrac{B(n)}{n(n+1)} \right\} = 4$, which is a rational number.
Note that the manipulations with series above are legal because $B(n) \le 1 + [\log_2(n)]$, so $\dfrac{B(n)}{n(n+1)} \le \dfrac{1}{n^{3/2}}$ for large enough $n$, which means that the series in question converges.
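A numeric cross-check of the value $4$ (plain Python; the tail beyond $N$ terms is $O(\log N/N)$, so convergence is slow but visible):

```python
from math import exp

S = sum(bin(n).count("1") / (n * (n + 1)) for n in range(1, 2_000_000))
print(exp(S))   # ≈ 3.9999..., approaching 4 as the cutoff grows
```<|endoftext|>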
TITLE: Closed form for $\int x^ne^{-x^m} \ dx\ ?$
QUESTION [10 upvotes]: While entertaining myself by answering a question, the following problem arose.
For what natural numbers $n,m$ does the following indefinite integral have a closed form:
$$\int x^ne^{-x^m} \ dx\ ?$$
Closed form here means that the antiderivative can be expressed using only powers $x^{\cdots}$ and exponentials $e^{-x^{\cdots}}$.
I created the following matrix showing for different pairs of $n$ and $m$ the nature of the antiderivative.
$$\begin{matrix}
& m&1&2&3&4&5&6&7\\
n\\
1&&\checkmark&\checkmark&\Gamma&\text{erf}&\Gamma&\Gamma&\Gamma\\
2&&\checkmark&\text{erf}&\checkmark&\Gamma&\Gamma&\text{erf}&\Gamma\\
3&&\checkmark&\checkmark&\Gamma&\checkmark&\Gamma&\Gamma&\Gamma\\
4&&\checkmark&\text{erf}&\Gamma&\Gamma&\checkmark&\Gamma&\Gamma\\
5&&\checkmark&\checkmark&\checkmark&\text{erf}&\Gamma&\checkmark&\Gamma\\
6&&\checkmark&\text{erf}&\Gamma&\Gamma&\Gamma&\Gamma&\checkmark\\
7&&\checkmark&\checkmark&\Gamma&\checkmark&\Gamma&\Gamma&\Gamma\\
\end{matrix}$$
The $\checkmark$ sign stands for a closed form, "erf" signals that the antiderivative contains the erf function , and $\Gamma$ signals that the antiderivative contains the upper incomplete $\Gamma$ function.
I have no clue. Does anybody?
REPLY [5 votes]: Let's start with a simple substitution $x=u^{1/m}$. This gives us
$$I=\frac1m\int u^{(n+1)/m-1}e^{-u}\ du=\frac1m\gamma\left(\frac{n+1}m,x^m\right)+c$$
This trivially has closed forms for $\frac{n+1}m\in\mathbb N$ due to integration by parts. Indeed, checking your table, it corresponds with every checkmark perfectly.
And just for the record, when $k\in\mathbb N$,
$$\int x^ke^{-x}\ dx=-e^{-x}\sum_{n=0}^k\frac{k!}{n!}\,x^n+c$$
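A SymPy spot-check of this reduction for small $k$ (a sketch assuming SymPy):

```python
import sympy as sp

x = sp.symbols('x')
for k in range(6):
    F = -sp.exp(-x) * sum(sp.factorial(k) / sp.factorial(i) * x**i
                          for i in range(k + 1))
    assert sp.simplify(sp.diff(F, x) - x**k * sp.exp(-x)) == 0
print("verified for k = 0, ..., 5")
```<|endoftext|>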
TITLE: Transformation of Random Variable $Y = X^2$
QUESTION [10 upvotes]: I'm learning probability, specifically transformations of random variables, and need help to understand the solution to the following exercise:
Consider the continuous random variable $X$ with probability density function $$f(x) = \begin{cases} \frac{1}{3}x^2 \quad -1 \leq x \leq 2, \\ 0 \quad \quad \text{elsewhere}. \end{cases}$$ Find the cumulative distribution function of the random variable $Y = X^2$.
The author gives the following solution:
For $0 \leq y \leq 1: F_Y(y) = P(Y \leq y) = P(X^2 \leq y) \stackrel{?}{=} P(-\sqrt y \leq X \leq \sqrt y) = \int_{-\sqrt y}^{\sqrt y}\frac{1}{3}x^2\, dx = \frac{2}{9}y\sqrt y.$
For $1 \leq y \leq 4: F_Y(y) = P(Y \leq y) = P(X^2 \leq y) \stackrel{?}{=} P(-1 \leq X \leq \sqrt y) = \int_{-1}^{\sqrt y}\frac{1}{3}x^2\, dx = \frac{1}{9} + \frac{1}{9}y\sqrt y.$
For $y > 4: F_{Y}(y) = 1.$
Previous to this exercise, I've managed to follow the solutions of two similar (obviously simpler) problems for a strictly increasing and strictly decreasing function of $X$, respectively. However in this problem, I don't understand the computations being done, specifically:
How does the three intervals $0 \leq y \leq 1$, $1 \leq y \leq 4$ and $y > 4$ are determined? In the two previous problems I've encountered, we only considered one interval which was identical to the interval where $f(x)$ was non-zero.
In the case where $0 \leq y \leq 1$, why does $P(X^2 \leq y) = P(-\sqrt y \leq X \leq \sqrt y)$ and not $P(X \leq \sqrt y)$? I have put question marks above the equalities that I don't understand.
I think I have not understand the theory well enough. I'm looking for an answer that will make me understand the solution to this problem and possibly make the theory clearer.
REPLY [10 votes]: Let's start by seeing what the density function $f_X$ of $X$ tells us about the cumulative distribution function $F_X$ of $X$. Since $f_X(x) = 0$ for $-\infty < x < -1$, we see that
$$F_X(x) = \int_{-\infty}^x f_X(t) \, dt \equiv 0 $$
in this range. Similarly, since $f_X(x) = 0$ in the range $2 < x < \infty$, we see that
$$F_X(x) = \int_{-\infty}^x f_X(t) \, dt = \int_{-\infty}^{\infty} f_X(t) \, dt \equiv 1$$
in this range. In other words, the random variable is "supported on the interval $[-1,2]$" in the sense that $P(X \notin [-1,2]) = 0$.
Now let us consider $Y = X^2$. This variable is clearly non-negative and since $X$ is supported on $[-1,2]$, we must have that $Y$ is supported on $[0, \max((-1)^2,2^2)] = [0,4]$. This is intuitively clear because the variable $X$ (with probability $1$) takes values in [-1,2] and so $X^2$ takes values in $[0,\max((-1)^2,(2)^2)]$. So we only need to understand $F_Y(y)$ in the range $y \in [0,4]$. Now, we always have
$$ F_Y(y) = P(Y < y) = P(X^2 < y) = P(-\sqrt{y} < X < \sqrt{y}) = \int_{-\sqrt{y}}^{\sqrt{y}} f_X(t) \, dt $$
but since $f_X$ is defined piecewise, to proceed at this point we need to analyze several cases. We already know that $F_Y(y) = 0$ if $y \leq 0$ and $F_Y(y) = 1$ if $y \geq 4$.
If $0 \leq y \leq 1$ then $[-\sqrt{y},\sqrt{y}]$ is contained in $[-1,1]$ and on $[-1,1]$ the density function is $f_X(x) = \frac{1}{3}x^2$ so we can write
$$ F_Y(y) = \int_{-\sqrt{y}}^{\sqrt{y}} \frac{1}{3} t^2 \, dt. $$
However, if $1 < y \leq 4$ then $-\sqrt{y} < -1$ and so the interval of integration splits as $[-\sqrt{y}, -1] \cup [-1,\sqrt{y}]$. Over the left $[-\sqrt{y},-1]$ part, the density function is zero so the integal will be zero and we are left only with calculating the integral over the right part:
$$ F_Y(y) = \int_{-\sqrt{y}}^{-1} f_X(t) \, dt + \int_{-1}^{\sqrt{y}} f_X(t) \, dt = \int_{-1}^{\sqrt{y}} \frac{1}{3}t^2 \, dt. $$
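The piecewise formula is easy to validate by simulation; a minimal sketch (assuming NumPy), sampling $X$ by inverse transform from $F_X(x)=(x^3+1)/9$ on $[-1,2]$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.cbrt(9 * rng.uniform(size=1_000_000) - 1)   # inverse of F_X
Y = X**2

def F_Y(y):
    return np.where(y <= 1, (2 / 9) * y * np.sqrt(y),
                    1 / 9 + (1 / 9) * y * np.sqrt(y))

for y in (0.25, 0.5, 1.0, 2.0, 3.0):
    print(y, (Y <= y).mean(), float(F_Y(y)))   # empirical vs. formula
```<|endoftext|>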
TITLE: find total number of maximal ideals in $\mathbb{Q}[x]/\langle x^4-1\rangle$.
QUESTION [10 upvotes]: find total number of maximal ideals in $\mathbb{Q}[x]/\langle x^4-1\rangle$.
Let $J=\langle x^4-1\rangle$, $R=\mathbb{Q}[x]$. I want to use $(R/J)/(I/J)\simeq R/I$, where $I$ is an ideal of $R$ which contains $J$. Then $R/I$ is a field, and $R$ is a principal ideal domain. Let $I=\langle f(x) \rangle$; then $f(x)$ must be irreducible in $R$ (and divide $x^4-1$), so the only choices for $f(x)$ are $x-1$, $x+1$ and $x^2+1$.
So the answer should be $3$. Is this explanation right? And is there a better method?
thanks in advance
REPLY [3 votes]: That is all correct. (To get this out of the unanswered queue.)<|endoftext|>
TITLE: What are the group homomorphisms from $ \prod_ {n \in \mathbb {N}} \mathbb {Z} / \bigoplus_ {n \in \mathbb {N}} \mathbb {Z} $ to $ \mathbb {Z} $?
QUESTION [8 upvotes]: By a theorem of Specker, there's only the zero map, since any map out of $ \prod_{n \in \mathbb{N}} \mathbb{Z} $ is determined by the values on the unit vectors, which all lie in $ \bigoplus_{n \in \mathbb{N}} \mathbb{Z} $; but the original proof is more general, uses a bunch of machinery, and is in German. Isn't there an easier way?
REPLY [6 votes]: There is a nice quick proof. I'm not sure who the proof is due to.
The statement is equivalent to
If $P$ is the group of sequences ${\bf a}=(a_0,a_1,\dots)$ of integers, and $f:P\to\mathbb{Z}$ is a homomorphism that vanishes on finite sequences (so that $f({\bf a})=f({\bf b})$ whenever ${\bf a}$ and ${\bf b}$ differ in only finitely many places), then $f=0$.
Suppose $f:P\to\mathbb{Z}$ is a homomorphism that vanishes on finite sequences.
For any ${\bf a}\in P$, we can write $a_n=b_n+c_n$, where $b_n$ is divisible by $2^n$ and $c_n$ is divisible by $3^n$.
Then for each $n$, ${\bf b}$ differs in only finitely many places from a sequence divisible by $2^n$, so $f({\bf b})$ is divisible by $2^n$ for all $n$, and so $f({\bf b})=0$. Similarly $f({\bf c})=0$, and so $f({\bf a})=f({\bf b}+{\bf c})=0$.<|endoftext|>
TITLE: How to disprove that every odd number can be written in the form $2^n + p$ with $p$ prime?
QUESTION [8 upvotes]: How can I disprove that every odd number, $2k+1>1$
can be written in the form $2k+1 = 2^n + p$ with $p$ prime?
I know it's not true but I don't know how to explain that it is not true.
REPLY [11 votes]: It suffices to find a counterexample.
After some searching, we find A133122
Odd numbers which cannot be written as the sum of an odd prime and a power of two
$$1, 3, 127, 149, 251, 331, 337, 373, 509, 599, 701,\dots$$
Even allowing $n=0$ and the use of even primes to say $3=2^0+2$ and ignoring $1$, the smallest counterexample is apparently $127$.
To prove that $127$ is in fact a counterexample, note that $127 = 64+3^2\cdot 7 = 32+5\cdot 19 = 16 + 3\cdot 37 = 8+7\cdot 17=\dots$ and so no power of two is valid.
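The check is mechanical; a short sketch in plain Python:

```python
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

n = 127
hits = [(2**k, n - 2**k) for k in range(7) if is_prime(n - 2**k)]
print(hits)   # [] -- no way to write 127 as (power of two) + prime
```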
These numbers are named Obstinate Numbers.<|endoftext|>
TITLE: Prove that the center of G has order divisible by p
QUESTION [8 upvotes]: Let $G$ be a non-trivial finite group and $p$ a prime number. If every proper subgroup $H \lneq G$ has index divisible by $p$, prove that the center of $G$ has order divisible by $p$.
So I have that $[G:H]=pk$ for some integer $k$, and we need to prove that $|Z(G)|=pl$ for some integer $l$. Let $|G|=n$. I can prove the case if I assume $G$ is an abelian group: then $|Z(G)|=n$, and $n$ is divisible by $p$ (the trivial subgroup has index $n$), so the center has order divisible by $p$.
How should I approach if $G$ is not abelian?
REPLY [16 votes]: Note that the class equation of a finite group $G$ is
$|G|=|Z(G)|+\sum_{i=1}^n |cl(a_i)|\implies |Z(G)|=|G|-\sum_{i=1}^n |cl(a_i)|$
where the $a_i$ are representatives of the distinct conjugacy classes not contained in the center.
Now $|cl(a_i)|=\dfrac{|G|}{|C(a_i)|}$ where $C(a_i)=\{x\in G:xa_i=a_ix\}$.
Note that $C(a_i)$ is a subgroup of $G$ for each $1\leqslant i\leqslant n.$
Since each $a_i\notin Z(G)$, the subgroup $C(a_i)$ is proper, so by hypothesis its index is divisible by $p$; hence $p\mid |cl(a_i)|$ for all $1\leqslant i\leqslant n.$
Also $p\mid |G|$: for any subgroup $H$ of $G$ we have $|G|=\left|\dfrac{G}{H}\right||H|$, so $p \mid \dfrac{|G|}{|H|}\implies p\mid |G|$; apply this with $H$ the trivial subgroup.
Now in $|Z(G)|=|G|-\sum_{i=1}^n |cl(a_i)|$ the right-hand side is divisible by $p$, so the left-hand side is as well.<|endoftext|>
TITLE: The rate of convergence of Cesaro average of Fourier series
QUESTION [5 upvotes]: Do you know any estimates of the rate of convergence of the Cesàro averages of a Fourier series? It does not matter for which classes of functions. It would be great if you could give some estimates depending on the smoothness of the function.
It is well known that the Cesàro averages converge uniformly for all continuous functions. Also, for example, there is a well-known estimate for the Fourier series itself (not the Cesàro average) that looks like $O(\frac{\log n}{n^p})$, where $p$ is the smoothness of the function. I would like to know analogous results for Cesàro sums.
Great thanks for any links, papers, books and so on!
REPLY [6 votes]: There is a paper of R. Bojanic and S. M. Mazhar, "An estimate of the rate of convergence of the Nörlund–Voronoi means of the Fourier series of functions of bounded variation", Approx. Theory III, Academic Press (1980), 243–248.
It says that if
$f:[-2\pi,2\pi]\rightarrow\mathbb{R}$ is $2\pi$-periodic and of bounded
variation and $S_{n}(f,x)$ is the partial sum of its Fourier series, then
$$
\left\vert \frac{1}{n}\sum_{k=1}^{n}S_{k}(f,x)-\frac{1}{2}\bigl(f_{+}(x)+f_{-}(x)\bigr)\right\vert \leq\frac{c}{n}\sum_{k=1}^{n}\operatorname{Var}\nolimits_{[0,\pi/k]}g_{x},
$$
where for every fixed $x\in\lbrack-2\pi,2\pi]$, $g_{x}(t):=f(x+t)+f(x-t)-f_{+}(x)-f_{-}(x)$ for $t\neq0$ and $g_{x}(0):=0$. Here $f_{+}(x)$ and $f_{-}(x)$ are the right and left limits.
In particular, if $f$ is piecewise $C^{1}$,
then
\begin{align*}
\operatorname{Var}\nolimits_{[0,\pi/k]}g_{x} & =\int_{0}^{\pi/k}|g_{x}^{\prime}(t)|\,dt=\int_{0}^{\pi/k}|f^{\prime}(x+t)-f^{\prime}(x-t)|\,dt\\
& \simeq|f_{+}^{\prime}(x)-f_{-}^{\prime}(x)|\,\frac{1}{k}
\end{align*}
and so
$$
\frac{c}{n}\sum_{k=1}^{n}\operatorname{Var}\nolimits_{[0,\pi/k]}g_{x}\simeq|f_{+}^{\prime}(x)-f_{-}^{\prime}(x)|\,\frac{c}{n}\sum_{k=1}^{n}\frac{1}{k}\simeq|f_{+}^{\prime}(x)-f_{-}^{\prime}(x)|\,\frac{\log n}{n}.
$$
If $f_{+}^{\prime}(x)=f_{-}^{\prime}(x)$ and $f$ is piecewise $C^{2}$, then
$$
\int_{0}^{\pi/k}|f^{\prime}(x+t)-f^{\prime}(x-t)|\,dt\simeq|f_{+}^{\prime\prime}(x)-f_{-}^{\prime\prime}(x)|\,\frac{1}{k^{2}}
$$
and so
$$
\frac{c}{n}\sum_{k=1}^{n}\operatorname{Var}\nolimits_{[0,\pi/k]}g_{x}\simeq|f_{+}^{\prime\prime}(x)-f_{-}^{\prime\prime}(x)|\,\frac{c}{n}\sum_{k=1}^{n}\frac{1}{k^{2}}.
$$<|endoftext|>
TITLE: Why is important for a manifold to have countable basis?
QUESTION [18 upvotes]: I've seen there are few questions similar, but I haven't seen anyone so precise or with a good answer.
I'd like to understand the reason why we ask in the definition of a manifold the existence of a countable basis. Does anybody has an example of what can go wrong with an uncountable basis? When does the problem arise? Does it arise when we want to differentiate something or does it arise before?
Thank You
REPLY [24 votes]: There is one point that is mentioned in passing in Moishe Cohen's nice answer that deserves a bit of elaboration, which is that a lot of the time it is not important for a manifold to have a countable basis. Rather, what is important in most applications is for a manifold to be paracompact: this is what gives you partitions of unity, which are essential to an enormous amount of the theory of manifolds (for instance, as the other answer mentioned, proving that any manifold admits a Riemannian metric).
Paracompactness follows from second-countability, which is the main reason why second-countability is useful. Paracompactness is weaker than second-countability (for instance, an uncountable discrete space is paracompact), but it turns out that it isn't weaker by much: a (Hausdorff) manifold is paracompact iff each of its connected components is second-countable. To put it another way, a general paracompact manifold is just a disjoint union of (possibly uncountably many) second-countable manifolds. So if you care mainly about connected manifolds (or even just manifolds with only countably many connected components), you lose no important generality by assuming second-countability rather than paracompactness.
There are also a few situations where it really is convenient to assume second-countability and not just paracompactness. For instance, in the theory of Lie groups, it is convenient to be able to define a (not necessarily closed) Lie subgroup of a Lie group $G$ as a Lie group $H$ together with a smooth injective homomorphism $H\to G$. If you allowed your Lie groups to not be second-countable, you would have the awkward and unwanted example that $\mathbb{R}$ as a discrete space is a Lie subgroup of $\mathbb{R}$ with the usual $1$-dimensional smooth structure (via the identity map). For instance, this example violates the theorem (true if you require second-countability) that a subgroup whose image is closed is actually an embedded submanifold.<|endoftext|>
TITLE: Problems understanding proof of if $x + y = x + z$ then $y = z$ (Baby Rudin, Chapter 1, Proposition 1.14)
QUESTION [20 upvotes]: I'm having trouble with whether Rudin actually proves what he's tried to prove.
Proposition 1.14; (page 6)
The axioms of addition imply the following statements:
a) if $x + y = x + z$ then $y = z$
The author's proof is as follows:
$ y = (0 + y) = (-x + x) + y = -x + (x + \textbf{y})$
$$ = -x + (x + \textbf{z}) = (-x + x) + z = (0 + z) = z $$
I emphased the section which troubles me.
How does Rudin prove that $ y = z $ if he substituted $y = z$?
REPLY [53 votes]: He didn't substitute $z$ for $y$; rather, he substituted $x+z$ for $x+y$. This is legitimate based on the assumption that $x+y = x+z$.<|endoftext|>
TITLE: Triangular and Fibonacci numbers: $\sum_{k=0}^{2n}T_{2n-k}\color{red}{F_k^2}=F_{2n}F_{2n+1}-n$
QUESTION [6 upvotes]: Well known Fibonacci square series $(1)$
$$0^2+1^2+1^2+2^2+3^2+\cdots F_{n}^2=F_{n}F_{n+1}\tag1$$
$T_n=0,1,3,6,10,\ldots$ and $F_n=0,1,1,2,3,\ldots$ for $n=0,1,2,3,\ldots$
Now we included Triangular numbers into $(1)$ as shown below
$$T_0F_0^2=F_0F_1$$
$$T_1F_0^2+T_0F_1^2=F_1F_2-1$$
$$T_2F_0^2+T_1F_1^2+T_0F_2^2=F_2F_3-1$$
$$T_3F_0^2+T_2F_1^2+T_1F_2^2+T_0F_3^2=F_3F_4-2$$
$$T_4F_0^2+T_3F_1^2+T_2F_2^2+T_1F_3^2+T_0F_4^2=F_4F_5-2$$
$$T_5F_0^2+T_4F_1^2+T_3F_2^2+T_2F_3^2+T_1F_4^2+T_0F_5^2=F_5F_6-3$$
Observing these series involving Triangular and Fibonacci numbers together, we found the following closed forms.
For an even number of terms ($m=2n+1$):
$$\sum_{k=0}^{2n+1}T_{2n+1-k}\color{red}{F_k^2}=F_{2n+1}F_{2n+2}-n-1\tag2$$
For an odd number of terms ($m=2n$):
$$\sum_{k=0}^{2n}T_{2n-k}\color{red}{F_k^2}=F_{2n}F_{2n+1}-n\tag3$$
How can we prove $(2)$ and $(3)$?
An attempt:
Knowing that $T_n={n(n+1)\over 2}$ then $(3)$ becomes
$${1\over 2}\sum_{k=0}^{2n}(2n-k)(2n-k+1)F_k^2=F_{2n}F_{2n+1}-n$$
Simplified down to
$$(4n^2+2n)\sum_{k=0}^{2n}F_k^2+\sum_{k=0}^{2n}(k^2-k-4nk)F_k^2=2F_{2n}F_{2n+1}-2n$$
finally down to
$$\sum_{k=0}^{2n}(k^2-k-4nk)F_k^2=(2-2n-4n^2)F_{2n}F_{2n+1}-2n$$
we are not sure what to do next...
REPLY [4 votes]: This answer uses
$$\sum_{k=0}^{n}F_k^2=F_nF_{n+1}\tag4$$
$$\sum_{k=0}^{n}kF_k^2=nF_nF_{n+1}-F_n^2+\frac{1+(-1)^{n-1}}{2}\tag5$$
$$(-1)^n=F_{n-1}F_{n+1}-F_n^2\tag6$$
The proofs are written at the end of the answer.
We want to prove that
$$\sum_{k=0}^{m}T_{m-k}F_k^2=F_{m}F_{m+1}-\left\lceil\frac{m}{2}\right\rceil\tag7$$
Let us prove $(7)$ by induction on $m$ using $(4)(5)(6)$.
$(7)$ holds for $m=1$.
Supposing that $(7)$ holds for some $m\ (\ge 1)$ gives
$$\begin{align}\sum_{k=0}^{m+1}T_{m+1-k}F_k^2&=\sum_{k=0}^{m}(T_{m-k}+m+1-k)F_k^2\\\\&=\left(\sum_{k=0}^{m}T_{m-k}F_k^2\right)+\left(\sum_{k=0}^{m}(m+1-k)F_k^2\right)\\\\&=F_{m}F_{m+1}-\left\lceil\frac m2\right\rceil+(m+1)\left(\sum_{k=0}^{m}F_k^2\right)-\left(\sum_{k=0}^{m}kF_k^2\right)\\\\&=F_{m}F_{m+1}-\left\lceil\frac m2\right\rceil+(m+1)F_{m}F_{m+1}-\left(mF_{m}F_{m+1}-F_{m}^2+\frac{1+(-1)^{m-1}}{2}\right)\\\\&=2F_{m}F_{m+1}+F_{m}^2-\left\lceil\frac m2\right\rceil-\frac{1+(-1)^{m-1}}{2}\\\\&=2F_{m}F_{m+1}+F_{m-1}F_{m+1}-(-1)^m-\left\lceil\frac m2\right\rceil-\frac{1+(-1)^{m-1}}{2}\\\\&=F_{m+1}(F_m+F_m+F_{m-1})-(-1)^m-\left\lceil\frac m2\right\rceil-\frac{1+(-1)^{m-1}}{2}\\\\&=F_{m+1}F_{m+2}-(-1)^m-\left\lceil\frac m2\right\rceil-\frac{1+(-1)^{m-1}}{2}\\\\&=F_{m+1}F_{m+2}-\left\lceil\frac{m+1}{2}\right\rceil\qquad\blacksquare\end{align}$$
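As a sanity check alongside the induction, the identity $(7)$ is easy to test numerically (plain Python, exact integers):

```python
def check(m_max=500):
    F = [0, 1]
    while len(F) <= m_max + 1:
        F.append(F[-1] + F[-2])
    T = lambda i: i * (i + 1) // 2          # triangular numbers
    for m in range(m_max + 1):
        lhs = sum(T(m - k) * F[k] ** 2 for k in range(m + 1))
        assert lhs == F[m] * F[m + 1] - (m + 1) // 2   # (m+1)//2 = ceil(m/2)
    print("identity (7) holds for m = 0, ...,", m_max)

check()
```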
Let us prove $(4)$ by induction on $n$.
$$\sum_{k=0}^{n}F_k^2=F_nF_{n+1}\tag4$$
$(4)$ holds for $n=1$.
Supposing that $(4)$ holds for some $n\ (\ge 1)$ gives
$$\sum_{k=0}^{n+1}F_k^2=F_nF_{n+1}+F_{n+1}^2=F_{n+1}(F_n+F_{n+1})=F_{n+1}F_{n+2}\qquad \blacksquare$$
Next, let us prove $(5)$ by induction on $n$ using $(6)$.
$$\sum_{k=0}^{n}kF_k^2=nF_nF_{n+1}-F_n^2+\frac{1+(-1)^{n-1}}{2}\tag5$$
$$(-1)^n=F_{n-1}F_{n+1}-F_n^2\tag6$$
$(5)$ holds for $n=1$.
Supposing that $(5)$ holds for some $n\ (\ge 1)$ gives
$$\begin{align}\sum_{k=0}^{n+1}kF_k^2&=nF_nF_{n+1}-F_n^2+\frac{1+(-1)^{n-1}}{2}+(n+1)F_{n+1}^2\\\\&=nF_nF_{n+1}+nF_{n+1}^2+F_{n+1}^2-F_n^2+\frac{1+(-1)^{n-1}}{2}\\\\&=nF_{n+1}(F_n+F_{n+1})+(F_{n+1}+F_n)(F_{n+1}-F_n)+\frac{1+(-1)^{n-1}}{2}\\\\&=nF_{n+1}F_{n+2}+F_{n+2}F_{n+1}-F_{n+2}F_n+\frac{1+(-1)^{n-1}}{2}\\\\&=(n+1)F_{n+1}F_{n+2}-(F_{n+1}^2+(-1)^{n+1})+\frac{1+(-1)^{n-1}}{2}\\\\&=(n+1)F_{n+1}F_{n+2}-F_{n+1}^2+\frac{1+(-1)^n}{2}\qquad\blacksquare\end{align}$$
Finally, let us prove $(6)$ by induction on $n$.
$$(-1)^n=F_{n-1}F_{n+1}-F_n^2\tag6$$
$(6)$ holds for $n=1$.
Supposing that $(6)$ holds for some $n\ (\ge 1)$ gives
$$\begin{align}(-1)^{n+1}&=-(-1)^n\\\\&=-F_{n-1}F_{n+1}+F_n^2\\\\&=-(F_{n+1}-F_n)F_{n+1}+F_n^2\\\\&=-F_{n+1}^2+F_nF_{n+1}+F_n^2\\\\&=-F_{n+1}^2+F_n(F_{n+1}+F_n)\\\\&=F_{n}F_{n+2}-F_{n+1}^2\qquad\blacksquare\end{align}$$<|endoftext|>
TITLE: On the integral $\int_{e}^{\infty}\frac{t^{1/2}}{\log^{1/2}\left(t\right)}\alpha^{-t/\log\left(t\right)}dt,\,\alpha>1.$
QUESTION [13 upvotes]: Let $\alpha>1$. I would like to find a closed form or an upper bound of
$$f\left(\alpha\right)=\int_{e}^{\infty}\frac{t^{1/2}}{\log^{1/2}\left(t\right)}\alpha^{-t/\log\left(t\right)}dt.$$
For the closed form I'm very skeptical but I have trouble also for an upper bound. I tried, manipulating a bit, to integrate w.r.t. $\alpha$ since $$\frac{\partial}{\partial\alpha}\alpha^{-t/\log\left(t\right)}=-\frac{t}{\alpha\log\left(t\right)}\alpha^{-t/\log\left(t\right)}$$ but it seems quite useless and at this moment I didn't see a good way to proceed.
Maybe it is interesting to see, using some trivial substitutions, that $$f\left(\alpha\right)=\int_{e}^{\infty}\frac{\left(e^{3/2}\right)^{-W_{-1}\left(-1/v\right)}}{v\left(-W_{-1}\left(-\frac{1}{v}\right)\right){}^{1/2}}\frac{W_{-1}\left(-\frac{1}{v}\right)}{W_{-1}\left(-\frac{1}{v}\right)+1}\alpha^{-v}dv$$ $$=\int_{e}^{\infty}g\left(v\right)\alpha^{-v}dv$$ where $W_{-1}\left(x\right)$ is the Lambert $W$ function. So it seems that $f(\alpha)$ is essentially a (truncated) Laplace transform of $g(v)$, evaluated at $\log\alpha$.
Thank you.
REPLY [2 votes]: A naive but probably efficient approach is to exploit the fact that the logarithm function is approximately constant on short intervals and
$$ \frac{1}{\sqrt{N}}\int_{e^N}^{e^{N+1}}\sqrt{t}\,\alpha^{-t/N}\,dt =\frac{N\sqrt{\pi}}{2\log(\alpha)^{3/2}}\,\text{Erf}\left(\sqrt{\frac{e^N\log\alpha}{N}}\right)$$
can be efficiently approximated through the continued fraction for the error function.
We may also consider this fact: through the Laplace transform
$$ \int_{0}^{+\infty}\sqrt{t}\exp\left(-\frac{t\log\alpha}{N}\right)\,dt = \int_{0}^{+\infty}\mathcal{L}^{-1}\left(\frac{1}{\sqrt{t}}\right)\,\mathcal{L}\left(t \exp\left(-\frac{t\log\alpha}{N}\right)\right)\,ds $$
we get the following integral:
$$ \int_{0}^{+\infty}\frac{N^2}{\sqrt{\pi s}(Ns+\log\alpha)^2}\,ds =\frac{2}{\sqrt{\pi}}\int_{0}^{+\infty}\frac{1}{(s^2+\frac{\log\alpha}{N})^2}\,ds$$
that is simple to estimate in terms of $N$ and $\alpha$. The original integral is a weigthed sum of these integrals, that according to my computations should behave like
$$\exp\left(-\log(\alpha)^{3/2}\right).$$
But I am probably over-complicating things, and we may recover the same bound by just applying a modified version of Laplace's method to the original integral.<|endoftext|>
TITLE: Why can a matrix without a full rank not be invertible?
QUESTION [12 upvotes]: I know you could just say because the $\det = 0$.
But during the introduction of determinants the professor said: obviously, if two columns of the matrix are linearly dependent, the matrix can't be inverted, and therefore the determinant is zero. He made it sound like it is an intuitive thing, a simple observation, but I always have to resort to the properties of determinants to show it.
How does one trivially see that you can not invert a matrix without a full rank?
REPLY [17 votes]: Suppose that the columns of $M$ are $v_1, \ldots, v_n$, and that they're linearly dependent. Then there are constants $c_1, \ldots, c_n$, not all $0$, with
$$
c_1 v_1 + \ldots + c_n v_n = 0.
$$
If you form a vector $w$ with entries $c_1, \ldots, c_n$, then (1) $w$ is nonzero, and (2) it'll turn out that
$$
Mw = c_1 v_1 + \ldots + c_n v_n = 0. (*)
$$
(You should write out an example to see why this first equality is true).
Now we also know that
$$
M0 = 0. (**)
$$
So if $M^{-1}$ existed, we could say two things:
$$
0 = M^{-1}0 \ (**)\\
w = M^{-1} 0\ (*)
$$
But since $w \ne 0$, these two are clearly incompatible. So $M^{-1}$ cannot exist.
Intuitively: a nontrivial linear combination of the columns is a nonzero vector that's sent to $0$, making the map noninvertible.
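A concrete numeric illustration (assuming NumPy; the matrix below is an arbitrary example whose third column is the sum of the first two):

```python
import numpy as np

M = np.array([[1.0, 2.0,  3.0],
              [4.0, 5.0,  9.0],
              [7.0, 8.0, 15.0]])      # column 3 = column 1 + column 2
w = np.array([1.0, 1.0, -1.0])        # coefficients of the dependency
print(M @ w)                          # [0. 0. 0.]: nonzero w is sent to 0
print(np.linalg.det(M))               # ≈ 0, so M has no inverse
```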
But when you really get right down to it: proving this, and things like it, help you develop your understanding, so that statements like this become intuitive. Think about something like "the set of integers that have integer square roots". I say that it's intuitively obvious that $19283173$ is not one of these.
Why is that "obvious"? Because I've squared a lot of numbers, and all the squares have a last digit that's either $0, 1, 4, 5, 6,$ or $9$ (because those are the last digits of squares of single-digit numbers). Now that I've told you that, my statement about "intuitively obvious" is obvious to you, too. But until you'd at least learned a little about integer squares by investigating them, your intuition wasn't as good as mine. Sometimes "intuition" is just another name for "applied experience."<|endoftext|>
TITLE: Prove that $f(7) = 56$ given $f(1)$ and $f(9)$ and $f' \le 6$
QUESTION [6 upvotes]: Let $f(x)$ be a continuous function on $[1,9]$, differentiable on $(1,9)$, with $f(1) = 20$, $f(9) = 68$ and $|f'(x)| \le 6$ for every $x \in (1,9)$.
I need to prove that $f(7) = 56$.
I started by using Lagrange's mean value theorem and found that there exists $c \in (1,9)$ with $f'(c) = \frac{f(9)-f(1)}{9-1} = \frac{48}{8} = 6$.
REPLY: Compare $f$ with the line $y = 20 + 6(x-1)$, which joins $(1,20)$ to $(9,68)$ with the maximal allowed slope $6$. By the mean value theorem on $[1,7]$ we get $f(7) \le f(1) + 6\cdot 6 = 56$, and on $[7,9]$ we get $f(7) \ge f(9) - 6\cdot 2 = 56$; hence $f(7) = 56$. (In fact the same squeeze at every point forces $f(x) = 14 + 6x$ on all of $[1,9]$.)<|endoftext|>
TITLE: Evaluate a limit involving a definite integral
QUESTION [22 upvotes]: Let $(I_n)_{n \geq 1}$ be a sequence such that:
$$I_n = \int_0^1 \frac{x^n}{4x + 5} dx$$
Evaluate the following limit:
$$\lim_{n \to \infty} nI_n$$
All I've been able to find is that $(I_n)$ is decreasing and converges to $0$.
Thank you!
REPLY [3 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
$\ds{\mrm{g}\pars{n,x} \equiv {\pars{1 - x}^{n} \over 9 - 4x}}$
This is an application of Laplace Method:
\begin{align}
\lim_{n \to \infty}\pars{n\int_{0}^{1}{x^{n} \over 4x + 5}\,\dd x} & =
{1 \over 9}\lim_{n \to \infty}\bracks{n
\int_{0}^{1}{\pars{1 - x}^{n} \over 1 - 4x/9}\,\dd x} =
{1 \over 9}\lim_{n \to \infty}\pars{n\int_{0}^{\infty}\expo{-nx}\,\dd x}
\\[5mm] & =
\bbx{\ds{1 \over 9}}
\end{align}
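A quick numerical confirmation that $nI_n \to \frac{1}{9}$ (a sketch assuming SciPy; the `points` hint steers the adaptive rule toward the spike of $x^n$ at $x=1$):

```python
from scipy.integrate import quad

for n in (20, 200, 2000):
    val, _ = quad(lambda x, n=n: x**n / (4 * x + 5), 0, 1,
                  points=[1 - 10.0 / n])
    print(n, n * val)   # increases toward 1/9 ≈ 0.1111
```<|endoftext|>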
TITLE: Are there more than $\beth_1$ non-homeomorphic topological subspaces of $\Bbb R$?
QUESTION [8 upvotes]: I've been asked by a younger student about a certain claim he had on a classification of topological subsets of $\Bbb R$. The overall idea was a bit fuzzy, but in hindsight it revolved around taking the $\sigma$-algebra generated by six (Borel) subsets + translations. I successfully (and, I hope, instructively) argued against it. However, this led me to the question:
Could I just cut it short and fancy with a cardinality argument? Specifically, if $\sim$ is the homeomorphism equivalence on $\mathcal P(\Bbb R)$, is $\operatorname{card}\left(\mathcal P(\Bbb R)/\sim\right)>\beth_1$ ?
Intuitively, I'd say yes, because, "come on, there are $\beth_2$ nasty non-Borel sets". And, "at chit-chat level, homeomorphisms $(a,b)\to(c,d)$ are monotone functions". However, this is neither a proof nor a sufficient reason for my question to even be decidable in ZFC.
In fact, on the topic I found this weaker fact: "closed subsets up to homeomorphism are exactly $\beth_1$".
Thank you for links and/or answers.
REPLY [10 votes]: Every subset of $\mathbb{R}$ has a countable dense subset. If $X\subseteq\mathbb{R}$ and $A\subseteq X$ is a countable dense subset, a homeomorphism from $X$ to another subset $Y\subseteq\mathbb{R}$ is determined by its restriction to $A$. So there is an injection from the set of homeomorphisms from $X$ to other subsets of $\mathbb{R}$ to the set of functions from $A$ to $\mathbb{R}$. There are only $\beth_1$ functions from $A$ to $\mathbb{R}$ since $A$ is countable.
So each subset of $\mathbb{R}$ can be homeomorphic to at most $\beth_1$ other subsets of $\mathbb{R}$. Since there are $\beth_2>\beth_1$ different subsets of $\mathbb{R}$, there must be $\beth_2$ different homeomorphism classes of subsets of $\mathbb{R}$.<|endoftext|>
TITLE: Hitting time of an open set is not a stopping time for Brownian Motion
QUESTION [7 upvotes]: Let $(B_t)$ be a standard Brownian motion and $\mathcal F_t$ the associated canonical filtration. It's a standard result that the hitting time for a closed set is a stopping time for $\mathcal F_t$ and the hitting time for an open set is a stopping time for $\mathcal F_{t+}$.
Is there an elementary way to see that the hitting time for an open set is not in general a stopping time for $\mathcal F_t$? Say the hitting time for an open interval $(a,b)$?
I'm interested in this question because it would show the filtration generated by a right-continuous process need not be right-continuous. There are other counterexamples for this on M.SE but they're all somewhat artificial.
My apologies if this is obvious. I just started learning about such things.
REPLY [7 votes]: No, it's not at all obvious.
If we interpret $\mathcal{F}_t$ as "information up to time $t$", it's not surprising that the hitting time of an open interval $(a,b)$ is not an $\mathcal{F}_t$-stopping time. For instance if $B_t(\omega)=a$ for some $t>0$, then the information about the past, i.e. $(B_s(\omega))_{s \leq t}$, is not enough to decide whether $\tau(\omega)=t$; we need a small glimpse into the future.
Making this intuition rigorous is, however, not easy. One possibility is to apply Galmarino's test which states that a mapping $\tau: \Omega \to [0,\infty]$ is a stopping time (with respect to $(\mathcal{F}_t)_{t \geq 0}$) if and only if $$\tau(\omega)=t, B_s(\omega) = B_s(\omega') \, \, \text{for all $s \leq t$} \implies \tau(\omega')=t, \tag{1}$$ see this question. If $\tau$ is the hitting time of an open interval, $(1)$ is not satisfied; the easiest way to see this is to consider the canonical Brownian motion, i.e. consider $(B_t)_{t \geq 0}$ as a process on the space of continuous mappings. For instance, take a continuous path $\omega$ with $\omega(s)<a$ for $s<t$, $\omega(t)=a$, entering $(a,b)$ immediately after time $t$, and let $\omega'$ agree with $\omega$ on $[0,t]$ and then move downward; then $\tau(\omega)=t$ but $\tau(\omega')>t$, contradicting $(1)$.
Let me finally remark that there are other ways to prove that the filtration generated by a Brownian motion is not right-continuous, see, for instance, this answer.<|endoftext|>
TITLE: A real function which is additive but not homogeneous
QUESTION [21 upvotes]: From the theory of linear mappings, we know linear maps over a vector space satisfy two properties:
Additivity: $$f(v+w)=f(v)+f(w)$$
Homogeneity: $$f(\alpha v)=\alpha f(v)$$
where $\alpha\in \mathbb{F}$ is a scalar in the field over which the vector space is defined, and neither of these conditions implies the other one. If $f$ is defined over the complex numbers, $f:\mathbb{C}\longrightarrow \mathbb{C}$, then finding a mapping which is additive but not homogeneous is simple; for example, $f(c)=c^*$. But can anyone present an example on the reals, $f:\mathbb{R}\longrightarrow \mathbb{R}$, which is additive but not homogeneous?
REPLY [18 votes]: If $f : \Bbb{R} \to \Bbb{R}$ is additive, then you can show that $f(\alpha v) = \alpha f(v)$ for any $\alpha \in \Bbb{Q}$ (so $f$ is a linear transformation when $\Bbb{R}$ is viewed as a vector space over $\Bbb{Q}$). As $\Bbb{Q}$ is dense in $\Bbb{R}$, it follows that an additive function that is not homogeneous must be discontinuous. To construct non-trivial discontinuous functions on $\Bbb{R}$ with nice algebraic properties, you usually need to resort to the existence of a basis for $\Bbb{R}$ viewed as a vector space over $\Bbb{Q}$. Such a basis is called a Hamel basis. Given a Hamel basis $B = \{x_i \mid i \in I\}$ for $\Bbb{R}$ (where $I$ is some necessarily uncountable index set), you can easily define a function that is additive but not homogeneous, e.g., pick a basis element $x_i$ and define $f$ such that $f(x_i) = 1$ and $f(x_j) = 0$ for $j \neq i$.
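For completeness, here is the routine verification that additivity alone forces $\Bbb Q$-homogeneity:
$$f(0)=f(0+0)=2f(0)\implies f(0)=0, \qquad f(nx)=nf(x)\ \text{for}\ n\in\Bbb{N}\ \text{by induction},$$
$$0=f(x+(-x))=f(x)+f(-x)\implies f(-x)=-f(x), \qquad mf\!\left(\tfrac{n}{m}x\right)=f(nx)=nf(x)\implies f\!\left(\tfrac{n}{m}x\right)=\tfrac{n}{m}f(x).$$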
REPLY [10 votes]: Additive but not homogeneous functions $f: \mathbb R\to\mathbb R$ have to be a little more complicated, since one can show that such functions cannot be measurable and therefore need the axiom of choice in some form to be constructed.
Consider $\mathbb R$ as a vector space over the field $\mathbb Q$ and select a basis $(r_i)_{i\in I}$. Call $(x,i)$ the coefficient of $r_i$ in the representation of $x$ with respect to the basis $(r_i)_{i\in I}$. Then $x\mapsto (x,i)$ is $\mathbb Q$-linear and therefore in particular additive, but it is obviously not $\mathbb R$-homogeneous because $(r_i,i) = 1$ and $0 = (r_j,i) = (\frac{r_j}{r_i}\cdot r_i,i)$ for $i\neq j$.
TITLE: Is there an explicit formula that gives the value of $\sqrt{2+\sqrt{2+\sqrt{2+\cdots}}}$ for $n$ square roots?
QUESTION [10 upvotes]: $$\sqrt{2+\sqrt{2+\sqrt{2+\cdots}}}$$
I know that with infinite square roots it's $x = \sqrt{2 + x}$, but what about a non-infinite number of roots? I've searched around a lot for this, and can't find anything useful, nor can I make a dent in the problem myself.
Maybe I'm searching using the wrong vocabulary?
REPLY [14 votes]: Elaborating on Michael Rozenberg's answer:
Note that
$$\sqrt{2+2\cos\alpha} = \sqrt{4\cos^2\left(\frac{\alpha}{2}\right)} = 2\cos\left(\frac{\alpha}{2}\right)$$
So,
$$\sqrt{2} = 2\cos\left(\frac{\pi}{4}\right)$$
$$\sqrt{2+\sqrt{2}} = 2\cos\left(\frac{\pi}{8}\right)$$
$$\vdots$$
Thus, if we have $n$ square roots, we have
$$x=2\cos\left(\frac{\pi}{2^{n+1}}\right)$$
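A quick numerical check of this closed form in Mathematica (my own snippet; the depth $n=6$ is arbitrary):
n = 6;
N[Nest[Sqrt[2 + #] &, Sqrt[2], n - 1], 30] - N[2 Cos[Pi/2^(n + 1)], 30]
(* 0. x 10^-31, agreement to the working precision *)<|endoftext|>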
TITLE: Calculating $\int \sqrt{1 + x^{-2}}dx$
QUESTION [6 upvotes]: I would like to find
$$\int \sqrt{1 + x^{-2}}dx$$
I have found that it is equivalent to
$$
\int \frac{\sqrt{1 + x^2}}{x}dx
$$
but I am not sure what to do about it. With the trig substitution $x = \tan(\theta)$ I get
$$
\int \frac{1}{\sin(\theta)\cos^2(\theta)}d\theta
$$
but that seems to be a dead end.
REPLY [2 votes]: Set $x^{-1}=\sinh t$, so $\sqrt{1+x^{-2}}=\cosh t$. Then
$$
dx=-\frac{\cosh t}{\sinh^2t}\,dt
$$
and the integral becomes
$$
-\int\frac{\cosh^2t}{\sinh^2t}\,dt=
-\int\frac{1+\sinh^2t}{\sinh^2t}\,dt=\frac{\cosh t}{\sinh t}-t+c=
\sqrt{1+x^2}-\operatorname{arsinh}\frac{1}{x}+c
$$
You can find a more explicit expression for $\operatorname{arsinh}\frac{1}{x}$ by setting
$$
\frac{1}{x}=\frac{e^t-e^{-t}}{2}
$$
or
$$
xe^{2t}-2e^t-x=0
$$
so
$$
e^t=\frac{1+\sqrt{1+x^2}}{x}
$$
The final antiderivative is
$$
\sqrt{1+x^2}-\log\frac{1+\sqrt{1+x^2}}{x}+c
$$
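One can verify the antiderivative by differentiation, e.g. in Mathematica (a quick check of mine, assuming $x>0$):
Assuming[x > 0,
 Simplify[D[Sqrt[1 + x^2] - Log[(1 + Sqrt[1 + x^2])/x], x] - Sqrt[1 + x^-2]]]
(* 0 *)<|endoftext|>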
TITLE: Triple Integral $\iiint x^{2n}+y^{2n}+z^{2n}dV$
QUESTION [6 upvotes]: Evaluate:
$$\iiint_{x^2+y^2+z^2 \leqslant 1} x^{2n}+y^{2n}+z^{2n} dV $$
I have tried to convert to spherical polars and then compute the integral, but it gets really messy because of the 2n power. Any tips?
REPLY [6 votes]: First observation: it is symmetric in $x,y,z$, so by linearity we have
$$\iiint_{x^2+y^2+z^2 \leqslant 1} x^{2n}+y^{2n}+z^{2n} dV =3\iiint_{x^2+y^2+z^2 \leqslant 1} z^{2n} dV.$$
Choosing spherical coordinates it becomes
$$3\iiint_{x^2+y^2+z^2 \leqslant 1} (r\cos \theta)^{2n} dV$$
where $dV= r^2 \sin \theta \ \text{d}r \ \text{d}\theta \ \text{d}\phi$.
Thus the integral simplifies to
$$3 \int_0^{2\pi}\int_0^{\pi}\int_0^1 r^{2(n+1)} (\cos \theta)^{2n} \sin \theta \ \text{d}r \ \text{d}\theta \ \text{d}\phi = \frac{3}{2n+3}2 \pi \int_0^{\pi}(\cos \theta)^{2n} \sin \theta \ \text{d}\theta. $$
Using that
$$\int_0^{\pi}(\cos \theta)^{2n} \sin \theta \ \text{d}\theta = \frac{2}{2n+1} $$
we have
$$\iiint_{x^2+y^2+z^2 \leqslant 1} x^{2n}+y^{2n}+z^{2n} dV= \frac{3}{2n+3}2 \pi \frac{2}{2n+1}.$$
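This simplifies to $\frac{12\pi}{(2n+1)(2n+3)}$. As a sanity check, for $n=1$ it gives $\frac{4\pi}{5}$, matching the direct computation
$$\iiint_{x^2+y^2+z^2 \leqslant 1} (x^2+y^2+z^2)\, dV=\int_0^1 4\pi r^4\, dr=\frac{4\pi}{5}.$$<|endoftext|>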
TITLE: Why does this algorithm for Egyptian fractions fail to terminate in ~$2$% of cases?
QUESTION [14 upvotes]: I thought up yet another algorithm for egyptian fraction expansion which turned out to be very effective (in terms of the length and the denominator size) - in most cases. However, for some fractions it doesn't terminate at all - it leads to an infinite loop.
Here is the algorithm:
Let $\frac{p}{q}<1$ and $p,q$ coprime. Find the minimal $m$ such that $q/(p+m)$ is an integer. We only need to consider $m \in [1,q-p]$. Represent the fraction as: $$\frac{p}{q}=\frac{p+m}{q}-\frac{m}{q}$$
Now to obtain a positive term instead of a negative one, we split the first fraction in two:
$$\frac{p+m}{q}-\frac{m}{q}=\frac{p+m}{2q}+\frac{p+m}{2q}-\frac{m}{q}=\frac{p+m}{2q}+\frac{p-m}{2q}$$
Here is a conditional: if $p<m$, we split the first fraction in two again, repeating until the remaining term is positive; if $p>m$ then $\frac{p}{q} \to \frac{p-m}{2q}$ and we repeat the first step of the algorithm.
The working name is complementary method, so I will use CM to denote it for now.
Despite its simplicity (it's not at all obvious why we are dividing by $2$ instead of using some other way to expand the first term) the algorithm works very well. In a lot of cases it beats every other algorithm I tried.
Since the greedy algorithm and Engel expansion are usually bad in terms of denominator size, I used two other methods to compare: the Binary Remainder Method and my own 'Splitting-Joining method' (the details can be found in my Mathematica SE question). I also compared it to a modification of Engel proposed by Daniel Fischer in this answer, and CM is mostly better for the examples he provided.
Some examples of the best results (a sequence of denominators is provided in each case):
4/49: CM {14,98}; BR {16,98,196,392,784}; SJ {13,325,925,1813}
3/35: CM {14,70}; BR {16,70,140,560}; SJ {20,28}
47/104: CM {4,8,13}; BR {4,8,16,104,208}; SJ {4, 14, 26, 28, 52, 70, 104, 130, 182}
Some examples of the worst results (but still valid - algorithm terminates):
94/191: CM {4, 8, 16, 32, 64, 256, 512, 1024, 2048, 4096, 8192, 24448, 48896, 97792, 195584, 391168, 782336, 1564672}
65/157: CM {4, 8, 32, 256, 512, 1024, 2048, 4096, 10048, 20096, 40192, 80384, 160768, 643072}
52/139: CM {4, 16, 32, 64, 128, 278, 556, 1112, 2224, 8896, 17792}
However, in these cases both BR and SJ methods also give long expansions with large denominators.
Now the real problem which I'm trying to solve: why does the algorithm in some cases not terminate, but lead to loops? From large-scale experiments I estimate the proportion of such fractions to be about $1.8$% (for numerators and denominators below $1000$).
The examples of such 'bad' fractions are:
$$\frac{41}{111},\frac{5}{87},\frac{8}{87},\frac{14}{87},\frac{47}{87},\frac{61}{102},\frac{17}{69},\frac{33}{119},\frac{38}{93},\frac{77}{177},\frac{32}{57},\frac{99}{185},\frac{98}{141},\frac{100}{129},\dots$$
The most common denominator is $87$ for some reason. Note that not all of the denominators and/or numerators are prime.
The problem can be solved by using $\frac{p-1}{q}$ instead, but not in every case, for example $7/87$ doesn't work either. However, both $6/87$ and $2/87$ work, and give different denominators, so we can expand $8/87$ after all.
I think the problem might be related to the use of the expansion $1=1/2+1/2$ to divide the first term. However, when I tried some other schemes, I didn't get good results (for example, I got repeating fractions when using $1=1/3+2/3$).
The working Mathematica code for the algorithm is:
x=6/87;  (* the fraction to expand; Mathematica reduces it to 2/29 *)
p0=Numerator[x];
q0=Denominator[x];
S=0;  (* running sum of the computed terms *)
Nm=100;  (* maximum number of terms *)
a=Table[1,{k,1,Nm}];  (* the terms of the expansion *)
m=Table[1,{k,1,Nm}];  (* the value of m used at each step *)
p1=p0;
q1=q0;
j=1;
While[Abs[p0]>1&&j<=Nm&&q0<10^35,M=Catch[Do[If[FractionalPart[q0/(p0+k)]<1/10^55,Throw[k]],{k,0,q0-p0}]];  (* minimal m such that (p0+m) divides q0 *)
m[[j]]=M;
a[[j]]=(p0+M)/(2 q0);  (* the split-off unit fraction *)
p1=Numerator[a[[j]]-M/q0];  (* remainder (p0-M)/(2 q0), reduced *)
q1=Denominator[a[[j]]-M/q0];
While[p1<0,a[[j]]=a[[j]]/2;  (* halve the first term while the remainder is negative *)
p1=Numerator[a[[j]]+p1/q1];
q1=Denominator[a[[j]]+p1/q1]];
If[a[[j]]!=1,S+=a[[j]]];
j++;
p0=p1;q0=q1];
a[[j]]=p1/q1;  (* append the final remainder *)
S+=a[[j]];
Denominator[Table[a[[k]],{k,1,j}]]  (* the list of denominators *)
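For clarity, here is a more compact implementation of the same procedure (a sketch of mine; cmExpand is a made-up name, and the step cap simply guards against the looping cases):
cmExpand[x_Rational, maxSteps_: 40] :=
 Module[{r = x, p, q, m, a, terms = {}},
  While[Numerator[r] =!= 1 && Length[terms] < maxSteps,
   p = Numerator[r]; q = Denominator[r];
   m = SelectFirst[Range[q - p], Divisible[q, p + #] &];  (* minimal m with (p+m) | q *)
   a = (p + m)/(2 q);  (* a unit fraction, since p+m divides q *)
   r = (p - m)/(2 q);  (* the remainder, possibly negative *)
   While[r < 0, a = a/2; r = a + r];  (* split the first term again until positive *)
   AppendTo[terms, a]];
  Denominator /@ Append[terms, r]]
cmExpand[4/49]
(* {14, 98}, matching the example above *)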
And the second question: how to modify the algorithm so it always terminates?
Update:
Among the first $10000$ fractions with $p \neq 1$ in lexicographic order there are $269$ fractions for which this algorithm doesn't work. (Seems to be more than $2$%). They are:
5/33,5/51,2/55,32/55,4/57,7/57,13/57,23/57,32/57,6/65,43/66,4/69,8/69,11/69,17/69,40/69,50/69,8/85,59/85,4/87,5/87,7/87,8/87,14/87,34/87,47/87,62/87,65/87,76/87,5/93,7/93,10/93,19/93,38/93,50/93,67/93,6/95,8/95,63/95,61/102,9/110,59/110,4/111,7/111,8/111,13/111,16/111,22/111,25/111,31/111,41/111,44/111,59/111,62/111,68/111,82/111,7/114,65/114,71/114,83/114,103/114,6/115,11/115,17/115,63/115,3/119,5/119,10/119,15/119,16/119,33/119,37/119,45/119,61/119,66/119,67/119,71/119,73/119,78/119,96/119,101/119,4/123,5/123,8/123,10/123,11/123,17/123,20/123,26/123,29/123,35/123,46/123,49/123,67/123,70/123,76/123,86/123,92/123,4/129,5/129,10/129,13/129,14/129,19/129,28/129,31/129,37/129,47/129,53/129,71/129,74/129,80/129,91/129,100/129,77/130,53/132,119/132,11/138,31/138,77/138,85/138,91/138,103/138,4/141,5/141,7/141,8/141,14/141,16/141,17/141,23/141,32/141,35/141,41/141,52/141,55/141,74/141,79/141,82/141,88/141,98/141,101/141,110/141,121/141,3/143,7/143,21/143,40/143,42/143,60/143,73/143,80/143,98/143,120/143,138/143,6/145,8/145,13/145,21/145,64/145,79/145,93/145,122/145,6/155,7/155,9/155,12/155,14/155,69/155,99/155,102/155,107/155,131/155,5/159,7/159,10/159,11/159,14/159,19/159,20/159,23/159,32/159,38/159,58/159,64/159,83/159,85/159,91/159,113/159,116/159,125/159,136/159,9/161,11/161,101/161,103/161,16/165,41/165,61/165,116/165,151/165,33/170,101/170,7/174,37/174,43/174,65/174,95/174,97/174,101/174,103/174,115/174,155/174,8/175,11/175,78/175,108/175,111/175,113/175,116/175,148/175,4/177,5/177,8/177,10/177,11/177,13/177,17/177,19/177,20/177,22/177,26/177,29/177,35/177,38/177,44/177,64/177,67/177,70/177,77/177,94/177,95/177,97/177,103/177,122/177,128/177,131/177,137/177,140/177,154/177,4/183,5/183,7/183,10/183,11/183,13/183,14/183,19/183,20/183,22/183,28/183,34/183,37/183,40/183,49/183,65/183,68/183,71/183,74/183
Update 2 (Important)
The question was deleted for a time, because for some of the listed fractions the algorithm seems to work just fine when done by hand. There is some error in my code which I haven't been able to find yet.
But there are fractions which lead to loops by hand as well (such as $41/111$) so the question still stands.
REPLY [6 votes]: There's still an error in your code, in the While loop: you take the new p1 = Numerator[a[[j]]+p1/q1] and then compute q1 = Denominator[a[[j]]+p1/q1] from the already-updated p1, so the pair (p1, q1) no longer describes the remainder you intended. Store a[[j]]+p1/q1 in a temporary variable first and take its numerator and denominator from that.<|endoftext|>
TITLE: Geodesics between singular points in a translation surface
QUESTION [5 upvotes]: Consider a translation surface $X$ with $n\ge 2$ points of conical singularity $x_1,\dots,x_n$ of cone angle $\theta_i=2k_i\pi$, $k_i>1$.
Suppose that the geodesic $\sigma$ from $x_1$ to $x_2$ for the singular flat metric is a straight segment. By "geodesic" I mean a global geodesic, meaning that the length of $\sigma$ with respect to the singular flat metric equals the distance of the two points with respect to the induced metric. Now consider any smooth point $x\in X$ such that the geodesic $\tau$ from $x_1$ to $x$ for the singular flat metric is a segment and such that the angle at $x_1$ between $\sigma$ and $\tau$ is greater than $\pi$ (by "angle" I don't mean Alexandrov's definition of angle, but simply the angle measured at the conical point, where the total angle is $2k_1\pi$).
Question 1: Is $\sigma\ast \tau^{-1}$ always the geodesic from $x$ to $x_2$? Or could such geodesic be a straight segment or pass through another singular point?
Question 2: If $x_2$ were a smooth point then the answer to the previous question is always yes?
REPLY [2 votes]: In general, given a nonpositively curved manifold (equipped with a possibly singular Riemannian metric) with nontrivial topology, (local) geodesics need not be global distance minimizers, and no local assumptions can help you with this. As a specific example for your question, start with the flat 2-torus $T^2$. Let $c$ be the shortest closed geodesic on $T^2$. (There might be several; pick one.) Pick two points $p, q\in c$ which divide the geodesic into arcs of equal length. Pick also a point $r\in T^2 - c$. Now, consider the 3-fold branched cover $S\to T^2$ ramified (with degree 3) at the points $p, r$. Lift the flat metric on $T^2$ to a singular flat metric on $S$. Let $x$ be the preimage of $p$ in $S$. The loop $c$ will lift to several different loops on $S$, all of length equal to that of $c$; one of them will be a loop $\tilde{c}$ which makes the angle $3\pi$ at $x$. The point $q$ will have three preimages in $S$, one of them on $\tilde{c}$; I will denote it by $y$. The loop $\tilde{c}$ is the concatenation of two arcs $\sigma=yx$ and $\tau^{-1}=xy$ of equal length. Since the loop $c$ was the shortest closed geodesic on $T^2$, both arcs will be distance-minimizers on $S$. However, their concatenation is, of course, not a distance-minimizer, since it is a closed geodesic on $S$.<|endoftext|>
TITLE: Tensor Product of dual linear maps
QUESTION [5 upvotes]: Suppose $V$ and $W$ are finite dimensional linear spaces and $V^*$ as well as $W^*$ are their appropriate linear duals.
Now let $f: V \to W$ and $g: V \to W$ be linear maps. Is the following identity correct?
$f^* \otimes g^* = (f \otimes g)^*$
That is the tensor product of the dual linear maps, is the linear dual of the tensor product of the maps.
I can't find this on the Wikipedia page for the tensor product, nor on the Wikipedia page for dual linear maps. Is it therefore probably wrong? I don't think so.
REPLY [3 votes]: The problem is that the two maps you are considering do not have the same domain and codomain:
$$f^*\otimes g^*:W^*\otimes W^*\to V^*\otimes V^*\quad\quad (f\otimes g)^*:(W\otimes W)^*\to(V\otimes V)^*$$
so they cannot possibly be equal as maps. However, we do have a canonical map
$$\eta_W:W^*\otimes W^*\to (W\otimes W)^*,\quad (\eta_W(\phi\otimes\psi))(w_1\otimes w_2) = \phi(w_1)\psi(w_2)$$ (as explained in this question) which is an isomorphism in the finite dimensional case. Although $f^*\otimes g^*$ and $(f\otimes g)^*$ are not equal, I believe that the following is a factorization of $f^*\otimes g^*$:
$$W^*\otimes W^*\xrightarrow{\ \eta_W \ }(W\otimes W)^*\xrightarrow{\ (f\otimes g)^* \ }(V\otimes V)^*\xrightarrow{\ \eta_V^{-1} \ }V^*\otimes V^*.$$
To prove this, it suffices to show that the square:
$$\require{AMScd} \begin{CD}
W^*\otimes W^* @>{\eta_W}>> (W\otimes W)^*\\ @V{f^*\otimes g^*}VV @VV{(f\otimes g)^*}V\\
V^*\otimes V^* @>>{\eta_V}> (V\otimes V)^*
\end{CD}
$$
commutes. Let $\phi\otimes\psi\in W^*\otimes W^*$. We want to show that
$$((f\otimes g)^*\circ\eta_W)(\phi\otimes\psi) \quad\text{and}\quad ((f^*\otimes g^*)\circ\eta_V)(\phi\otimes\psi)$$
are equal in $(V\otimes V)^*$ (so equal as maps from $V\otimes V$ to the underlying field $\mathbb{F}$). With that in mind, let $v_1\otimes v_2\in V\otimes V$. Then, we have
\begin{align}
\notag ((f\otimes g)^*\circ\eta_W(\phi\otimes\psi))(v_1\otimes v_2) &= (\eta_W(\phi\otimes\psi)\circ(f\otimes g))(v_1\otimes v_2)\\
\notag &= \eta_W(\phi\otimes\psi)(f(v_1)\otimes g(v_2))\\
\notag &= \phi(f(v_1))\cdot \psi(g(v_2))
\end{align}
and:
\begin{align}
\notag (\eta_V\circ(f^*\otimes g^*)(\phi\otimes\psi))(v_1\otimes v_2) &= \eta_V((\phi\circ f)\otimes(\psi\circ g))(v_1\otimes v_2)\\
\notag &= \phi(f(v_1))\cdot \psi(g(v_2)).
\end{align}
Since everything was arbitrary, we have $f^*\otimes g^* = \eta_V^{-1}\circ (f\otimes g)^*\circ\eta_W$.
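Under these identifications, in coordinates the factorization reduces to the familiar matrix identity $(A\otimes B)^{T}=A^{T}\otimes B^{T}$, which can be checked quickly in Mathematica (my own illustration):
A = RandomInteger[{-3, 3}, {2, 2}]; B = RandomInteger[{-3, 3}, {2, 2}];
Transpose[KroneckerProduct[A, B]] == KroneckerProduct[Transpose[A], Transpose[B]]
(* True *)<|endoftext|>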
TITLE: weak*-convergence and weak operator topology - multiplication operator
QUESTION [6 upvotes]: The setting : let $(\Omega,\mu)$ $\sigma$-finite measure space and let $M_\phi : L^2(\Omega,\mu) \to L^2(\Omega,\mu)$ the multiplication operator with $\phi \in L^{\infty}(\Omega,\mu)$
I want to show :
If $M_{\phi_{i}} \to M_\phi $ in weak operator topology, then $\phi_i \to \phi$ in weak*-topology
I already managed to show the reverse statement.
I don't know if this helps or is even true: maybe I can write every $f \in L^1$ as a product of two functions in $L^2$?
REPLY [2 votes]: As alluded to in your question, the hardest part is writing an $L^1$ function as a product of two $L^2$ functions. But this turns out to be easier than expected.
Suppose $M_{\phi_i}$ is WOT-convergent to $M_\phi$, and let $f\in L^1$ be given. Then we can write $f=|f|e^{i\theta}$, where $\theta$ is a measurable function. Now define
\begin{align*}
g&=|f|^{1/2}e^{i\theta}, \\
h&=|f|^{1/2}.
\end{align*}
Then $g,h\in L^2$ and we have
$$\langle M_{\phi_i}g,h\rangle=\int\phi_i|f|e^{i\theta}\ d\mu
=\int\phi_if\ d\mu. $$
By hypothesis, $\langle M_{\phi_i}g,h\rangle\to\langle M_\phi g,h\rangle$, and thus
$$ \int\phi_if\ d\mu\to\int\phi f\ d\mu. $$
Since $f\in L^1$ was arbitrary, we know $\{\phi_i\}$ is weak$^*$-convergent to $\phi$.<|endoftext|>
TITLE: When is a matrix function the Jacobian matrix of another mapping
QUESTION [5 upvotes]: Suppose $J(x)$ is a continuous matrix-valued function $\mathbb{R}^D \to \mathbb{R}^{D \times D}$. Does there always exist a mapping $f: \mathbb{R}^D \to \mathbb{R}^D$ such that $J = \nabla f$? If not, are there well-known conditions under which this mapping exists?
REPLY [2 votes]: Let $J_{1}, \ldots, J_{D}$ denote the columns of $J$. Then each $J_{i}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}$, and so you are trying to find functions $f_{i}:\mathbb{R}^{D}\rightarrow\mathbb{R}$ such that $J_{i}=\nabla f_{i}$ for every $i=1, \ldots, D$. It is easy to construct counterexamples. If $J$ is $C^{1}$ and not just continuous, then, since the domain is $\mathbb{R}^{D}$, a necessary and sufficient condition for each $J_{i}$ to be the gradient of a function is that $J_{i}$ is irrotational, that is, $\frac{\partial J_{i,j}}{\partial x_{k}}=\frac{\partial J_{i,k}}{\partial x_{j}}$ for all $j$, $k$, where $J_{i}=(J_{i,1},\ldots,J_{i,D})$. In $\mathbb{R}^{2}$ take $J_{1}(x,y)=(y,2x)$ and anything you want for $J_{2}$. Then $\frac{\partial}{\partial y}(y)=1\neq\frac{\partial}{\partial x}(2x)=2$, and so $J_{1}$ is not irrotational.
If $J$ is just continuous, then a necessary and sufficient condition for each $J_{i}$ to be the gradient of a function is that $\int_{\gamma}J_{i}=0$ for every closed curve $\gamma$. This is not so easy to use because you have to check every closed curve, but if you find one for which the integral is nonzero, then you immediately know that $J_{i}$ cannot be the gradient of a function.
You can find all this stuff in Fleming, "Functions of Several Variables". Look for exact differential forms.
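The $2$-D counterexample can be checked mechanically, e.g. in Mathematica (my own snippet):
J1 = {y, 2 x};
D[J1[[1]], y] == D[J1[[2]], x]
(* False, so J1 is not irrotational and is not a gradient *)<|endoftext|>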
TITLE: How to evaluate $\sum\limits_{n \geq 0} \left(S_{n + 2} + S_{n + 1}\right)^2(-1)^n$, given the multivariable recurrence relation?
QUESTION [5 upvotes]: The given multivariable recurrence relation is that for every $n \geq 1$
$$S_{n + 1} = T_n - S_n$$
where $S_1 = \dfrac{3}{5}$ and $T_1 = 1$. Both $T_n$ and $S_n$ depend on the following condition
$$
\dfrac{T_n}{S_n} = \dfrac{T_{n + 1}}{S_{n + 1}} = \dfrac{T_{n + 2}}{S_{n + 2}} = \dots
$$
The goal is to evaluate
$$\sum\limits_{n \geq 0} \left(S_{n + 2} + S_{n + 1}\right)^2 (-1)^n$$
Since the change between $T_n$ and $T_{n + 1}$ is not constant, I believe that the way to approach this problem is to have all terms with consistent coefficient. However, I am not skillful enough to simplify the summation into a single variable.
REPLY [4 votes]: Notice that
$$\frac53=\frac{T_1}{S_1}=\frac{T_n}{S_n}$$
Thus, $T_n=\frac53S_n$. Putting this in, we get
$$S_{n+1}=\frac23S_n$$
which is a geometric sequence. The general form is then $S_n=\frac35\times\left(\frac23\right)^n$, so we have
$$\text{Sum}=\frac49\sum_{n\ge0}a^n$$
where $a=-\frac49$, a very simple geometric series: it evaluates to $\frac49\cdot\frac{1}{1-a}=\frac49\cdot\frac{9}{13}=\frac{4}{13}$.<|endoftext|>
TITLE: How are these problems called and how are they solved?
QUESTION [5 upvotes]: I'm self learning calculus and I stumbled upon the following problem:
Express
$I_n =\int \frac{dx}{(x^2+a^2)^n}$
Using $I_{n-1}$ ($a$ is a positive parameter and $n=2,3,4,...$)
Is this about double integrals? Could anyone please elaborate a bit more so I can learn how to solve this type of problems?
=================
EDIT
Continuing @SimplyBeautifulArt's answer:
$I_n=a^{1-2n}\int{cos^{2n-2}(u)du} = a^{1-2n}\int{cos^{2n-3}(u)cos(u)du}$
$I_{n-1}=a^{3-2n}\int{cos^{2n-4}(u)du}$
Integrating by parts($f:cos^{2n-3}(u); dg:cos(u)du$):
$I_n=a^{1-2n}(cos^{2n-3}(u)sin(u) + (2n-3)\int{cos^{2n-4}(u)sin^2(u)du})$
$I_n=a^{1-2n}(cos^{2n-3}(u)sin(u) + (2n-3)(\int{cos^{2n-4}(u)du} -\int{cos^{2n-2}(u)du}))$
$I_n=a^{1-2n}cos^{2n-3}(u)sin(u) + (2n-3)(\frac{a^{3-2n}}{a^2}\int{cos^{2n-4}(u)du} -a^{1-2n}\int{cos^{2n-2}(u)du})$
$I_n=a^{1-2n}cos^{2n-3}(u)sin(u) + (2n-3)(\frac{I_{n-1}}{a^2} -I_n)$
$I_n=(a^{1-2n}cos^{2n-3}(u)sin(u) + (2n-3)\frac{I_{n-1}}{a^2})/(2n-2)$
Recall $u=arctan(\frac xa)$
$I_n=(a^{1-2n}\frac{1}{\sqrt{1+(x/a)^2}}^{2n-3}\frac{x/a}{\sqrt{1+(x/a)^2}} + (2n-3)\frac{I_{n-1}}{a^2})/(2n-2)$
Is that all?
REPLY [3 votes]: Use the substitution $x=a\tan(u)$ to get
$$I_n=\int\frac{a\sec^2(u)}{(a^2\tan^2(u)+a^2)^n}\ du$$
Recall the trigonometric identity $1+\tan^2=\sec^2$ to reduce this to
$$I_n=a^{1-2n}\int\sec^{2-2n}(u)\ du$$
$$=a^{1-2n}\int\cos^{2n-2}(u)\ du$$
This is then handled using pythagorean theorem, integration by parts, and/or substitution, depending on the value of $n$, as described in this post.<|endoftext|>
TITLE: Do integrable functions vanish at infinity?
QUESTION [13 upvotes]: If $f$ is a real-valued function that is integrable over $\mathbb{R}$, does it imply that
$$f(x) \to 0 \text{ as } |x| \to \infty? $$
When I consider, for simplicity, positive function $f$ which is integrable, it seems to me that the finiteness of the "the area under the curve" over the whole line implies that $f$ must decay eventually. But is it true for general integrable functions?
REPLY [4 votes]: There are already good answers, I only wanted to make it more visual. Observe that
\begin{align}
-\infty &< \sum_{k=0}^{\infty} k\cdot 2^{-k} = 2 < \infty \\
-\infty &< \sum_{k=0}^{\infty} k\cdot(-2)^{-k} = -\frac{2}{9} < \infty
\end{align}
(it's easy enough to do by hand, but if you want, here and here are links to WolframAlpha).
Thus, we can use:
$$
f(x) = \sum_{k = 0}^{\infty}k\cdot(-1)^k \cdot \max(0,1-2^k\cdot|x-k|)
$$
Below are diagrams for $|f|$ and $f$:
I hope this helps $\ddot\smile$<|endoftext|>
TITLE: Prove that a Cauchy sequence is convergent
QUESTION [7 upvotes]: I need help understanding this proof that a Cauchy sequence is convergent.
Let $(a_n)_n$ be a Cauchy sequence. Let's prove that $(a_n)_n$ is bounded. In the definition of Cauchy sequence:
$$(\forall \varepsilon>0) (\exists n_\varepsilon\in\Bbb N)(\forall n,m\in\Bbb N)((n,m>n_\varepsilon)\Rightarrow(|a_n-a_m|<\varepsilon))$$
let $\varepsilon=1$. Then we have $n_1\in\Bbb N$ such that $\forall n,m\in\Bbb N\ (n,m>n_1)\Rightarrow(|a_n-a_m|<1)$. From there, for $n>n_1$ we have $|a_n|\leq |a_n-a_{n_1+1}|+|a_{n_1+1}|\ (*).$ Now take $M=\max\{|a_1|,\dots,|a_{n_1}|,1+|a_{n_1+1}|\}$, so that $|a_n|\leq M\ \forall n\in\Bbb N.$
Bounded sequence $(a_n)_n$ has a convergent subsequence $(a_{p_n})_n$, i.e. there exists $a=\lim_n a_{p_n}$. Let's prove $a=\lim_n a_n$. Let $\varepsilon>0$ be arbitrary. From the convergence of subsequence $(a_{p_n})_n$ we have $n'_\varepsilon\in\Bbb N$ such that
$$(n>n'_\varepsilon)\Rightarrow(|a_{p_n}-a|<\frac{\varepsilon}{2}).$$
Because $(a_n)_n$ is a Cauchy sequence, we have $n''_\varepsilon\in\Bbb N$ such that
$$(n,m>n''_\varepsilon)\Rightarrow(|a_n-a_m|<\frac{\varepsilon}{2}).$$
Let $n_\varepsilon=\max\{n'_\varepsilon, n''_\varepsilon\}$ so for $n>n_\varepsilon$ because $p_n\geq n$ we have
$$|a_n-a|\leq|a_n-a_{p_n}|+|a_{p_n}-a|<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon \ (**)$$ i.e. $a=\lim_n a_n$.
$(*)$Where did $|a_n|\leq |a_n-a_{n1+1}|+|a_{n1+1}|$ come from? I understand why that inequality is true, but I don't see the point in writing in like that.
$(**)$ Why is $|a_n-a_{p_n}|<\frac{\varepsilon}{2}?$
REPLY [2 votes]: Here is a quick proof using only the supremum property :
Let $(a_n)$ be a Cauchy sequence of reals. It is bounded [ There is an $N$ such that $ a_N, a_{N+1}, \ldots $ are in $ (a_N - 1, a_N + 1) $. Now $ \max \{ |a_1|, \ldots, |a_{N-1}|, |a_N|+1 \} $ is $ \geq $ each $ | a_n | $ ].
So $ \alpha_{j} := \sup\{a_j, a_{j+1}, \ldots \} $ are well-defined, bounded (and decreasing). Therefore they converge, to $ \alpha := \inf\{ \alpha_1, \alpha_2, \ldots \} $.
Let $\epsilon > 0 $. There is an $ N (=N_{\epsilon}) $ such that $ a_N, a_{N+1}, \ldots $ are in $ (a_N - \epsilon, a_N + \epsilon) $. So $ \alpha_N, \alpha_{N+1}, \ldots $ are in $ [ a_N - \epsilon, a_N + \epsilon ] $, and hence so is $ \alpha $.
Finally $ a_N, a_{N+1}, \ldots $ and $ \alpha $ are all in $ [ a_N - \epsilon, a_N + \epsilon ] $, ensuring each $ | a_N - \alpha |, |a_{N+1} - \alpha |, \ldots $ is $ \leq 2 \epsilon $.<|endoftext|>
TITLE: Prove that $\sin x+\sin y=1$ does not have integer solutions
QUESTION [17 upvotes]: Suppose $x$ and $y$ are angles measured in radians. Then how to show that the equation
$$\sin x+\sin y=1$$
does not have a solution $(x,y)\in\mathbb{N}\times\mathbb{N}$?
This question is prompted by curiosity. I don't have any ideas how it can be approached.
REPLY [22 votes]: No, and there is not even a solution for $(x,y)\in\mathbb Q\times \mathbb Q$.
We can quickly exclude $x=y$, which would require that $\sin x=\frac12$, but that is only true for $x=n\frac{\pi}{6}$ for certain nonzero integers $n$, and none of these produce a rational. Similarly we can easily exclude $x=0$, $y=0$, or $x=-y$.
Now, using Euler's formula, rewrite the equation to
$$ \tag{*} e^{ix} + e^{iy} - e^{-ix} - e^{-iy} = 2i\cdot e^0 $$
and apply the Lindemann–Weierstrass theorem which in one formulation says that the exponentials of distinct algebraic numbers are linearly independent over the algebraic numbers. But $\{\pm ix,\pm iy,0\}$ are all algebraic and (by our assumptions so far) different, so $\text{(*)}$ would be one of the linear relations that can't exist.
This argument generalizes to show that the only algebraic number that can be written as a rational combination of sines of algebraic (radian) angles is $0$.<|endoftext|>
TITLE: Understanding predicativity
QUESTION [9 upvotes]: In understanding the differences between impredicative and predicative definitions, I was able to understand impredicative as the following
A definition is said to be impredicative if it quantifies over the set being defined or over another set which contains the thing being defined. A prime example of this kind of definition is Russell's paradox
Now comparing to predicative definition, reading wiki it says that it entails constructing theories where quantification over lower levels results in variables of some new type, distinguished from the lower types that the variable ranges over.
The definition of predicative seems to be on a whole new level in terms of description. An example of a predicative definition I tried to connect the description with was Frege's first- and second-order calculus.
Could anyone perhaps offer a more simpler definition of a predicative definition, along with an example?
Thanks!
REPLY [3 votes]: As Professor Mummert has noted, the notion of a "predicative definition" is vague, although I would disagree that the same holds for "predicative mathematics". There are many complicated issues involved.
With respect to "definition", is it "obvious" that mathematics ought to be based upon "undefined primitives"? Russell and Whitehead made such a claim. You will find a detailed analysis with criticism of "Principia Mathematica" in the book $\underline{Definition}$ by Richard Robinson. Among the kinds of definitions one finds in non-foundational mathematics is "implicit definition". And, you will find that Professor Robinson does discuss them as legitimate forms of mathematical definition. When you think about the matter closely, you will realize that the "intensional definition" -- upon which Church introduced the lambda calculus -- is, in fact, a variation of implicit definition. The functions which Church introduced may be applied to themselves. Such functions are not representable in Zermelo-Fraenkel set theory because the axiom of foundation restricts that notion of set to being well-founded. Thus, the extension of a function in the sense of what Church did (that is, its representation as a set of ordered pairs) would have to appear as a domain element of the function. The axiom of foundation rules out such infinite descending chains of membership relations.
Now, consider the definition,
$$\forall x \forall y ( x \subset y \leftrightarrow
( \forall z ( y \subset z \rightarrow x \subset z )
\wedge
\exists z ( x \subset z \wedge \neg y \subset z ) ) )$$
I use this form of sentence for both the set theory and the arithmetic (interpreted as proper divisor) in which I am interested. The syntax is clearly circular. Is it an impredicative definition?
According to a monograph by Moschovakis, a sentence of this nature appears to be impredicative if one naively reads it as a second-order sentence, but it is, in fact, recursively constructive. And, indeed, you will find a sentence of this form used in $\underline{Set Theory}$ by Kunen in his discussion of forcing. By contrast, the full-blown transfinite recursion is presented by Jech in the first edition of his book $\underline{Set Theory}$.
When I say that "predicative mathematics" does not suffer from the same problem as "predicative definition", it is because it originates with Russell and Whitehead with the express purpose of avoiding the circularity which they believed responsible for the many early paradoxes in set theory. So, one understands sets, first and foremost, as collections of individuals which are not, themselves, a collection. Then, one may form additional sets from those individuals and those initial sets of individuals. The next "type" will be sets formed of "objects" previously obtained through "set formation". I apologize for finishing with all of these quotes. But, in natural language it gets complicated. In combination with the axioms of union and power set, the axiom of foundation provides for this structure.
This kind of distinction may be found in Aristotle. For Aristotle, individuals are primary substance. Notions such as "species" and "genus" are substances in the sense that what they categorize are individuals. But, Aristotle refers to them as secondary substances.
One of the interesting things one discovers when reading Aristotle is that his only admonition against circularity is that against trying to simultaneously attribute truth to deductive reasoning and inductive reasoning at the same time. In modern mathematics, this seems to be related to the Lyndon interpolation theorem. The proof of that theorem uses negation normal forms. The significance of this is the restricted second-order language presented by Flum and Ziegler in the early 1980's. Its formation rules are governed by negation normal forms while its semantics coincide with first-order semantics on trivial topologies and discrete topologies. It is clear that predicative mathematics will avoid invoking both the universal quantifier and the existential quantifier simultaneously. It emphasizes the existential quantifier as being semantically prior to the universal quantifier. But, without some accommodation to the logic, the syntactic definition of individuals (as opposed to relations) merely on the basis of "properties" puts one at risk of attributing truth to both the universal quantifier and the existential quantifier simultaneously.
This is what the distinction between "predicative definition" and "impredicative definition" is trying to restrict. But, it is not at all clear that a classification of definitions is the appropriate vehicle. What is at stake is the claim that "mathematics is extensional" and the interpretation of quantifiers as collections which are objects. The circularity of intensional definitions and recursive definitions does not seem to always lead to paradox.<|endoftext|>
TITLE: $(1,1)$ tensor vs a linear transformation (matrix)
QUESTION [5 upvotes]: Take $d$-dimensional Vector space $V$ with Field $R$.
A typical linear algebra linear transformation $V \to V$ can be represented by a $d \times d$ matrix $A$ such that for some $v,w \in V$, $Av=w$.
I'm learning about tensors, and I understand that a $(1,1)$ tensor $T$ is a linear transformation $V^* \times V \to R$. I've read that such a $(1,1)$ tensor is equivalent to such a matrix.
However, I find it very difficult to imagine what $V^*$ (the dual space, i.e. set of all maps $V\to R$) has to do with a simple linear transformation from $R^d$ to $R^d$.
Moreover, the tensor components apparently are defined as $T^i_{\space \space j}=T(\epsilon_i, e^j)$, where $e^j$ and $\epsilon_i$ are the $d$ basis vectors of $V$ and $V^*$ respectively. This means that if we were to write $T$ as a 2-dimensional array, it would seemingly have nothing to do with a matrix as in linear algebra.
So how are these two concepts connected?
This post is related to my question, but it doesn't really go into the difference between the matrix and tensor form.
REPLY [4 votes]: Given a linear map $\alpha:V\to V$ we can construct a bilinear form $\tau:V^*\times V\to R$, by taking $\tau(f,v)=f(\alpha v)$. (Note that $f(\alpha v)$ makes sense because $v\in V$ and $\alpha:V\to V$ so $\alpha v\in V$, and then $f\in V^*$ means $f:V\to R$, so $f(\alpha v)\in R$.)
Similarly given a bilinear form $\tau':V^*\times V\to R$ we can construct a map $\alpha': V\to V$ by noting that if $v\in V$ then $\tau'(-,v):V^*\to R$, and hence $\tau'(-,v)\in V^{**}$. Since $V$ is finite dimensional we have $V^{**}\cong V$ and hence we can define $\alpha'(v)$ to be the element of $V$ corresponding to $\tau'(-,v)$ in $V^{**}$. This means that $f(\alpha' v)=\tau'(f,v)$.
Hence given a map $V\to V$ we get a map $V^*\times V\to R$ and given a map $V^*\times V\to R$ we get a map $V\to V$ (and furthermore if we translate back and forth we end up where we started). So we can view linear maps $V\to V$ as "the same as" bilinear maps $V^*\times V\to R$.
Finally let's check that the matrices are the same. Given a map $\alpha:V\to V$ its matrix is defined by $A^i_{\;j}=\epsilon_j(\alpha e^i)$, and given a map $\tau:V^*\times V\to R$ its matrix is defined by $T^i_{\;j}=\tau(\epsilon_j,e^i)$. So if we have $\tau(f,v)=f(\alpha v)$ then $T^i_{\;j}=\tau(\epsilon_j,e^i)=\epsilon_j(\alpha e^i)=A^i_{\;j}$.
TITLE: Double Sum with a Neat Result
QUESTION [6 upvotes]: Based on an interesting question here (second question), I have devised a similar one.
Evaluate the following double sum without expansion and substitution of standard sum-of-integers formula.
$$\sum_{x=1}^n\sum_{y=1}^n (n-x+y)$$
REPLY [3 votes]: Here's a slightly different way to look at it. First, we rewrite the sum as:
$$\sum_{x=1}^{n} \sum_{y=1}^{n} (n-x+y) = \sum_{(x,y)\in\{1,\dots,n\}^2} (n-x+y) \enspace,$$
with the usual meaning of $A^2$ as the set of all ordered pairs of elements of the set $A$. Then, we observe that for $x\neq y$, the sum of the two terms corresponding to the pairs $(x,y)$ and $(y,x)$ is:
$$(n-x+y) + (n-y+x) = 2n$$
Therefore, we can think of each of the two pairs as contributing $n$ to the sum. For $x=y$, the term corresponding to $(x,y)$ is just $n$.
Conclusion: every pair contributes $n$ to the sum, so the sum is $n^2\cdot n= n^3$.
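Although the point of the exercise is to avoid the standard sum formulas, the result is easy to confirm symbolically (a quick Mathematica check of mine):
Sum[n - x + y, {x, 1, n}, {y, 1, n}] // Simplify
(* n^3 *)<|endoftext|>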
TITLE: Are there further gaps in the Eisenstein primes?
QUESTION [9 upvotes]: I recently played around with Eisenstein primes a bit (in an admittedly very amateurish way) and noticed among other things that there are no primes on the hexagonal ring that goes through (8,0) on the Eisenstein grid of the complex plane:
I thought this was a neat feature of the distribution of the primes and started looking for further such gaps. To my astonishment I haven't been able to find a single such gap up to at least a "radius" of 40,000,000. So now I'm wondering whether 8 is indeed the only such gap (ignoring the trivial cases of 0 and 1), or whether there might be further gaps at larger radii.
My Google efforts haven't turned up anything on this and I'm not sure how one would go about answering the question short of keeping the search running in hopes of finding another gap (which of course will never yield the answer "no further gaps exist"). I assume one could make a statistical argument based on the density of the Eisenstein primes, but I'm not sure how the prime number theorem applies to them.
REPLY [4 votes]: It seems to me that these problems (both in the case of Eisenstein and Gaussian primes) are really hard and outside of today's possibilities. I've checked all possible squares (in the case of Gaussian primes) and hexagons (in the case of Eisenstein primes) of size up to $10^9$, and the only "primeless" polygon was the hexagon mentioned by the OP and the one eight times smaller. However, the proof in the case of the Gaussian primes would be equivalent to showing that for every $n$ there exists $0 \le k \le n$ such that $n+ki$ is a Gaussian prime, which is (almost) equivalent to proving that $n^2+k^2$ is a prime number. It is even unclear why such a $k$ should exist, and proving that it should be of size $O(n)$ seems to be an even harder task. Eisenstein primes have similar problems, but now (at least for hexagons passing through even integers) the problem is (almost) equivalent to finding $0 \le k \le n$ such that $3n^2+k^2$ is a prime number. I am saying almost because for $k=0$ the criteria are different, but it seems not to matter, as one can still find primes with $k>0$.<|endoftext|>
TITLE: Associativity of concatenation
QUESTION [6 upvotes]: Prove that the following operator is associative for $b\in \Bbb N$
$$x||y = x\cdot b^{1+\lfloor\log_{b}{y}\rfloor}+y$$
One thing that you can notice is that it is the concatenation operator. However, you are not allowed to use this fact. In first order logic, we refuse to attach any meaning to any objects and try to prove things starting with axioms.
REPLY [4 votes]: I will be working off the formula
$$(x || y) = x * b^{\lfloor\log_b(y)\rfloor + 1} + y$$
You can see that this should be the case because with $b = 10$, $y \in [10,99]$ should multiply $x$ by $100$.
In the following, I will assume $b = 10$ and write $\log(x)$ for $\log_{10}(x)$. (The proof generalizes immediately to any $b$). The proof does not require case analysis, only applying a couple elementary properties of the floor function.
The more interesting point is perhaps about whether concatenation is 'unmathematical.' I'm not sure what you mean by 'unmathematical'. In general we want to be able to define concatenation of words for arbitrary symbolic systems (alphabets). I suppose you (or this youtube poster) are taking issue with the dependency on indexing (and knowing the length of the second argument) in the usual definition of concatenation.
We have by definition of the operator $||$
$\big(x || y \big) = 10x * 10^{\lfloor\log(y)\rfloor} + y$
$\big(y || z\big) = 10y * 10^{\lfloor\log(z)\rfloor} + z$
So $\big(x || y\big) || z =$ $$10*\big(x || y\big)*10^{\lfloor\log(z)\rfloor} + z =$$
$$(100x * 10^{\lfloor\log(y)\rfloor} + 10y)*10^{\lfloor\log(z)\rfloor} + z = $$
$$100x*10^{\lfloor\log(y)\rfloor + \lfloor\log(z)\rfloor} + 10y*10^{\lfloor\log(z)\rfloor} + z$$
Meanwhile $x||\big(y||z\big) = $
$$x || \big(10y * 10^{\lfloor\log(z)\rfloor} + z\big) =$$
$$10x * 10^{\lfloor \log (10y * 10^{\lfloor\log(z)\rfloor}) \rfloor} + 10y * 10^{\lfloor\log(z)\rfloor} + z = $$
$$10x * 10^{\lfloor 1 + \log(y) + \lfloor\log(z)\rfloor \rfloor} + 10y * 10^{\lfloor\log(z)\rfloor} + z$$
Consider that $\lfloor 1 + a + \lfloor b \rfloor \rfloor = 1 + \lfloor b \rfloor + \lfloor a \rfloor$, since $1 + \lfloor b \rfloor$ is an integer.
Applying this to the last line in the expansion of $x||\big(y||z\big)$ shows $x||\big(y||z\big) = $
$$10x * 10^{1 + \lfloor \log(y) \rfloor + \lfloor\log(z)\rfloor} + 10y * 10^{\lfloor\log(z)\rfloor} + z = $$
$$100x * 10^{\lfloor \log(y) \rfloor + \lfloor\log(z)\rfloor} + 10y * 10^{\lfloor\log(z)\rfloor} + z$$.
This shows that $x||\big(y||z\big) = \big(x||y\big)||z$ as desired, and we have proved that your formal "concatenation" operator satisfies associativity!
Now, have we actually mathematically captured concatenation?
The only problem to get around is zeros. $x || 0$ is not well-defined for us, and $0 || y$ returns $y$, so it isn't really concatenation in a string sense. Moreover, if a number contains $0$ in its middle, then treating it as a word we see that it is the concatenation $x||y$ where $y$ has a leading zero, and the formula breaks. The problem is that our formula deals with numbers, but concatenation deals with strings. In the former, leading zeros don't matter, but in the latter, they do.
This isn't ultimately a problem though. We can use this formula to define concatenation for any finite symbolic system.
Let $||$ be defined as above on any base $b$ number system with the explicit definition $x || 0 = x$ for all $x$.
Given an alphabet $A$ consisting of $n$ characters, we define concatenation on two words $u,v$ of $A$ as follows:
$$u || v = \phi^{-1}\big(\phi(u) || \phi(v)\big)$$
where $\phi$ is any bijection $\phi: A \rightarrow \{1,\ldots,n\}$ extended to act on words of $A$ elementwise, with its image on the empty word explicitly defined to be $0$. Note that $\phi$ is then a bijection between words of $A$ and numbers base $n+1$ which either are zero or have no zeros as digits.
So the operator of concatenation on words of any alphabet can be defined quite rigorously without making reference to the lengths or elements of the words. We only require for this that the alphabet be finite, because our formula for concatenation in a base $b$ number system breaks down for $b$ infinite.
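An empirical check of associativity in base $10$ (my own sketch; cat is just a helper implementing the formula above, with IntegerLength playing the role of $\lfloor\log_{10} y\rfloor + 1$):
cat[x_Integer?Positive, y_Integer?Positive] := x 10^IntegerLength[y] + y
{cat[12, 345], And @@ Table[With[{x = RandomInteger[{1, 10^6}], y = RandomInteger[{1, 10^6}], z = RandomInteger[{1, 10^6}]}, cat[cat[x, y], z] == cat[x, cat[y, z]]], {100}]}
(* {12345, True} *)<|endoftext|>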
TITLE: Proof: Inequality in Mercer's theorem
QUESTION [6 upvotes]: In the outline of the proof of Mercer's theorem there is an inequality assumed without any explanation:
$$\sum_{i=0}^{\infty} \lambda_i \vert e_i(t) e_i(s) \vert \le \sup_{x \in [a,b]} \vert K(x,x)\vert^2$$
Why does this need to hold?
REPLY [3 votes]: The Parseval-Bessel Theorem leads to
$$f= \sum_{j=1}^\infty (f,e_j)\, e_j~~ \text{for every }~~f\in L^2(a,b) \tag{1}$$
This implies, by linearity and continuity together with the fact that $Ke_j =\lambda_je_j$, that
$$ T_Kf =Kf =\sum_{j=1}^\infty \lambda_j\,(f,e_j)\, e_j~~ \text{for every }~~f\in L^2(a,b) \tag{2}$$
Hence, starting with finite sums and using Bessel's inequality, one can carefully check that
$$ (Kf,f)= \sum_{j=1}^\infty \lambda_j\,|(f,e_j)|^2 \tag{3}$$
We also set the Kernel
$$K_n(t,s) = \sum_{j=1}^{n} \lambda_j e_j(t) e_j(s)\tag{Kn}$$
then,
$$T_{K_n}f(t) = \sum_{j=1}^{n} \lambda_j e_j(t) \int_{a}^{b}f(s)e_j(s)\,ds = \sum_{j=1}^{n} \lambda_j (f\, ,e_j)e_j(t) $$
hence, $$ (K_nf,f)= \sum_{j=1}^{n} \lambda_j |(f\, ,e_j)|^2.$$
Next we consider the truncated Kernel
$$ R_n(t,s) =K(t,s)- \sum_{j=1}^n \lambda_j\,e_j(t)\, e_j(s)\tag{4}$$
It derives from the foregoing that
$$ (R_nf,f)= \sum_{j=n+1}^\infty \lambda_j\,|(f,e_j)|^2\ge 0~~\text{for every }~~f\in L^2(a,b) \tag{5}$$
i.e. $(R_nf,f)\ge0$; a standard result then yields $R_n(t,t)\ge0$ for almost every $t\in(a,b)$,
which leads to
$$ R_n(t,t) =K(t,t)- \sum_{j=1}^n \lambda_j\,e_j(t)\, e_j(t)\ge 0 \tag{6}$$
i.e.
$$ \sum_{j=1}^n \lambda_j\,e_j(t)\, e_j(t)\le K(t,t) \tag{7}$$
This holds true for arbitrary $n\in\mathbb{N}$, whence
$$ \sum_{j=1}^\infty \lambda_j\,e_j(t)\, e_j(t)\le \sup_{t\in [a,b]} K(t,t) $$
Applying the Cauchy–Schwarz inequality we get
\begin{split}
\Big|\sum_{j=1}^\infty \lambda_j\,e_j(s)\, e_j(t)\Big|^2 &\le \Big(\sum_{j=1}^\infty \lambda_j\,e_j(s)\, e_j(s)\Big ) \Big(\sum_{j=1}^\infty \lambda_j\,e_j(t)\, e_j(t)\Big)\\
&\le \sup_{t\in [a,b]} K^2(t,t)
\end{split}
Proof of the claim in the complex case
suppose there is $x_0$ such that $K(x_0,x_0)<0$ then there are $c,d $
such that
$a\le c
TITLE: Show that in Lyapunov equation $A^TQ+QA=-I$, the matrix $Q$ is positive definite.
QUESTION [5 upvotes]: Let $A$ be a matrix whose eigenvalues all have negative real parts. Define $Q=\int^{\infty}_0 B(t)dt$ where $B(t)=e^{A^Tt}e^{At}$. Prove that $Q$ is symmetric and positive definite.
This question is related to the corresponding Lyapunov equation $A^TQ+QA=-I$.
By the above we know that $B(t)^T=B(t)$ and $\forall x \neq 0. x^TB(t)x>0$. Therefore:
\begin{align}
-I
&=\lim_{\tau \to \infty} B(\tau) -I \qquad (\text{since } B(\tau)\to 0)\\
&=\lim_{\tau \to \infty} \int^{\tau}_0\frac{d B(t)}{dt}\,dt \\
&= \lim_{\tau \to \infty} \Big( A^T\int^{\tau}_0B(t)dt+\int^{\tau}_0B(t)dt\ A \Big)\\
&=A^TQ+QA\\
\end{align}
However I am confused on how to use these facts to show that $Q$ is symmetric and positive definite.
REPLY [3 votes]: The key is to note that any (pointwise) constant linear transformation commutes with integration. For example, we can show that $Q$ is symmetric since
$$
Q^T = \left[\int B(t)\right]^T = \int[B(t)]^T =
\int[e^{At}]^T[e^{A^Tt}]^T =
\int e^{A^Tt}e^{At} = \int B(t) = Q
$$
Similarly, one shows that $x^TQx > 0$ for $x \neq 0$, so that $Q$ is positive definite. Note in particular that
$$
x^TB(t)x = \|e^{At}x\|^2
$$
Moreover: if $x \neq 0$, $t \mapsto \|e^{At}x\|^2$ is necessarily a continuous, positive-valued function.
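A concrete instance in Mathematica (my own sketch, for one particular Hurwitz $A$ of my choosing):
A = {{-1, 1}, {0, -2}};  (* eigenvalues -1 and -2: negative real parts *)
Q = Integrate[MatrixExp[Transpose[A] t].MatrixExp[A t], {t, 0, Infinity}];
{Q, Simplify[Transpose[A].Q + Q.A], PositiveDefiniteMatrixQ[Q]}
(* {{{1/2, 1/6}, {1/6, 1/3}}, {{-1, 0}, {0, -1}}, True} *)<|endoftext|>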
TITLE: Number of normals to a parabola from a given point
QUESTION [6 upvotes]: I know that from any point a maximum of three normals could be drawn to a parabola because the equation of normal is cubic.
But I want to know the condition on the point for the number of normals
REPLY [10 votes]: Claude's technique can be extended slightly to find the points and normal lines given a particular point off of the parabola. Using this, I obtained an animation of some of the normals.<|endoftext|>
TITLE: A summation involving $\arctan$, $\pi$ and Hyperbolic function
QUESTION [6 upvotes]: Prove that
$$\sum_{n\in\mathbb{Z}}\arctan\left(\frac{\sinh(1)}{\cosh(2n)}\right)=\frac{\pi}{2}$$
Writing $$\dfrac{\sinh(1)}{\cosh(2n)}=\dfrac{e^{1}-e^{-1}}{e^{2n}+e^{-2n}}$$
I tried to use the identity
$$\arctan\left(\frac{a_1}{a_2}\right)+\arctan\left(\frac{b_1}{b_2}\right)=\arctan\left(\frac{a_1b_2+
a_2b_1}{a_2b_2-a_1b_1}\right)$$
with a suitable choice of $a_1,a_2,b_1,b_2$ but I haven't been able to find a telescopic sum.
REPLY [8 votes]: That is a telescopic sum in disguise. We may notice that:
$$\arctan\tanh(n+1)-\arctan\tanh(n-1) = \arctan\left(\frac{\tanh(n+1)-\tanh(n-1)}{1+\tanh(n-1)\tanh(n+1)}\right) $$
equals $\arctan\left(\frac{\sinh(2)}{\cosh(2n)}\right)$ and:
$$ \arctan\left(\frac{\sinh(1)}{\cosh(2n)}\right) = \arctan\tanh\left(n+\frac{1}{2}\right)-\arctan\tanh\left(n-\frac{1}{2}\right). $$
You may easily draw your conclusions now.
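Explicitly, summing the telescoping identity over $-N\le n\le N$ and letting $N\to\infty$:
$$\sum_{n=-N}^{N}\arctan\left(\frac{\sinh(1)}{\cosh(2n)}\right)=\arctan\tanh\left(N+\tfrac{1}{2}\right)-\arctan\tanh\left(-N-\tfrac{1}{2}\right)\ \xrightarrow{N\to\infty}\ 2\arctan(1)=\frac{\pi}{2}.$$<|endoftext|>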
TITLE: Showing matrices in $SU(2)$ are of form $\begin{pmatrix} a & -b^* \\ b & a^*\end{pmatrix}$
QUESTION [8 upvotes]: Matrices $A$ in the special unitary group $SU(2)$ have determinant $\operatorname{det}(A) = 1$ and satisfy $AA^\dagger = I$.
I want to show that $A$ is of the form $\begin{pmatrix} a & -b^* \\ b & a^*\end{pmatrix}$ with complex numbers $a,b$ such that $|a|^2+|b|^2 = 1$.
To this end, we put $A:= \begin{pmatrix} r & s \\ t & u\end{pmatrix}$ and impose the two properties.
This yields \begin{align}\operatorname{det}(A) &= ru-st \\ &= 1 \ ,\end{align}
and
\begin{align}
AA^\dagger &= \begin{pmatrix} r & s \\ t & u\end{pmatrix} \begin{pmatrix} r^* & t^* \\ s^* & u^* \end{pmatrix} \\&= \begin{pmatrix} |r|^2+|s|^2 & rt^* +su^* \\ tr^*+us^* & |t|^2 + |u|^2\end{pmatrix} \\
&= \begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix} \ .\\
\end{align}
The latter gives rise to
\begin{align}
|r|^2+|s|^2 &= 1 \\
&= |t|^2+|u|^2 \ ,
\end{align}
and
\begin{align}
tr^*+us^* &= 0 \\
&= rt^*+su^* \ .
\end{align}
At this point, I don't know how to proceed. Any hints would be appreciated.
@Omnomnomnom's remark
\begin{align}
A A^\dagger &= \begin{pmatrix} |r|^2+|s|^2 & rt^* +su^* \\ tr^*+us^* & |t|^2 + |u|^2\end{pmatrix} \\
&= \begin{pmatrix} |r|^2+|t|^2 & sr^* +ut^* \\ rs^*+tu^* & |s|^2 + |u|^2\end{pmatrix} = A^\dagger A \ ,
\end{align}
gives rise to
$$
|t|^2 = |s|^2 \\
|r|^2 = |u|^2
$$
and, comparing the off-diagonal entries of $AA^\dagger$ and $A^\dagger A$,
$$
rt^* + su^* = sr^* + ut^*, \qquad tr^* + us^* = rs^* + tu^*.
$$
At this point, I'm looking in to find a relation between $t,s$ and $r,u$ respectively.
REPLY [11 votes]: The condition $A^{\ast}A=I$ says that $A$ has orthonormal columns.
Suppose the first column is $v=[\begin{smallmatrix}a\\b\end{smallmatrix}]$. It must have unit norm, so $|a|^2+|b|^2=1$. What can the second column be? It must be orthogonal to the first, which means it must be in the complex one-dimensional orthogonal complement. Thus, if $w$ is orthogonal to $v$, then the possibilities for the second column are $\lambda w$ for $\lambda\in\mathbb{C}$. Since $\det[v~\lambda w]=\lambda\det[v~w]$, only one value of $\lambda$ will make the determinant $1$, hence the second column is unique. So it suffices to check $w=[-b ~~ a]^{\ast}$ works, which is natural to check because in ${\rm SO}(2)$ the second column would be $[-b~~a]^T$.
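Indeed, with $v=\begin{pmatrix}a\\b\end{pmatrix}$ and $w=\begin{pmatrix}-\bar b\\ \bar a\end{pmatrix}$ one checks directly that
$$\langle v,w\rangle = a\,\overline{(-\bar b)}+b\,\overline{\bar a}=-ab+ab=0, \qquad \det\begin{pmatrix}a&-\bar b\\ b&\bar a\end{pmatrix}=a\bar a+b\bar b=|a|^2+|b|^2=1.$$<|endoftext|>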
TITLE: Do any two coprime factors of $x^n-1$ over the $p$-adic integers $\mathbb{Z}_p$ which remain coprime over $\mathbb{F}_p$ generate comaximal ideals?
QUESTION [6 upvotes]: Let $f,g$ be distinct irreducible factors of $x^n-1$ over $\mathbb{Z}_p[x]$ (polynomials over $p$-adic integers). Suppose $\overline{f},\overline{g}$ are coprime in $\mathbb{F}_p[x]$ - thus, the ideal generated by them $(\overline{f},\overline{g}) = 1$ in $\mathbb{F}_p[x]$. Must $(f,g) = 1$ in $\mathbb{Z}_p[x]$?
Note that $f,g$ are certainly coprime, but $\mathbb{Z}_p[x]$, coprime doesn't mean comaximal (e.g. $p,x$ are coprime but not comaximal).
REPLY [4 votes]: Suppose $(f,g)\ne 1$, then they are contained in some maximal ideal $m\supset (f,g)$, but the maximal ideals of $\mathbb{Z}_p[x]$ are precisely the ideals of the form $(p,h(x))$, where $h(x)$ is irreducible and remains irreducible mod $p$. Thus, $\mathbb{Z}_p[x]/m\cong \mathbb{F}_p[x]/(\overline{h})$. This implies that $(\overline{h})\supset(\overline{f},\overline{g})$, but since $\overline{f},\overline{g}$ are comaximal, they generate the unit ideal, and so $\overline{h}$ must be a unit, contradicting the fact that $h$ is irreducible mod $p$.
This implies that $(f,g) = 1$.<|endoftext|>
TITLE: Prove that this iteration cuts a rational number in two irrationals $\sum_{n=0}^\infty \frac{1}{q_n^2-p_n q_n+1}+\lim_{n \to \infty} \frac{p_n}{q_n}$
QUESTION [6 upvotes]: For any $1 \leq p<q$ set $p_0=p$, $q_0=q$ and iterate
$$p_{n+1}=(q_n-p_n)(p_nq_n-1), \qquad q_{n+1}=q_n(q_n^2-p_n q_n+1).$$
The goal is to prove that
$$A=\sum_{n=0}^\infty \frac{1}{q_n^2-p_n q_n+1} \qquad\text{and}\qquad B=\lim_{n \to \infty} \frac{p_n}{q_n}$$
are both irrational (numerically their sum appears to equal $\frac{p}{q}$, so the iteration 'cuts' the rational $\frac{p}{q}$ into two irrational parts). Both sequences are positive, and for $n>1$ they are increasing (because $q_n$ is strictly increasing and it 'helps' $p_n$ after the first step).
For $n \to \infty$ we have $p_n \to \infty$ and $q_n \to \infty$, thus:
$$\frac{p_{n+1}}{q_{n+1}}=\frac{(q_n-p_n)(p_nq_n-1)}{q_n(q_n^2-p_n q_n+1)} \approx \frac{p_n}{q_n}$$
It is apparent that the limit exists.
The limit for $A$ exists because the sequence $q_n^2-p_n q_n+1$ grows much faster than $n^2$ and the sum obviously converges.
Update
A little something on a closed form. The system of recurrence relations can be rewritten as a single recurrence relation, using:
$$p_n=q_n+\frac{1}{q_n}-\frac{q_{n+1}}{q_n^2}$$
Then we have a second order recurrence relation:
$$q_{n+2}=q_{n+1}(q_{n+1}q_n+1)+\frac{q_{n+1}^3}{q_n^2} \left(\frac{q_{n+1}}{q_n}-1 \right)$$ $$q_0=q_0, \qquad q_1=q_0(q_0^2-q_0 p_0+1)$$
Or a more symmetric form:
$$\frac{q_{n+2}}{q_{n+1}}=q_{n+1}q_n+1+\frac{q_{n+1}^2}{q_n^2} \left(\frac{q_{n+1}}{q_n}-1 \right)$$
If we find a closed form for it (which I'm not sure exists) we can take the limit and find the closed form for $B$.
We also have a more simple looking relation (but it still requires us to know $q_n$):
$$\frac{p_n}{q_n}=1+\frac{1}{q_{n-1}^2}-\frac{q_{n-1}}{q_n}-\frac{q_n}{q_{n-1}^3}$$
And in fact, we can also write $A$ in terms of $q_n$:
$$A=\sum_{n=0}^\infty \frac{q_n}{q_{n+1}}$$
Update 2
Getting rid of some unnecessary parts, we can reformulate the problem:
Set some $q_1>q_0>0$. Then we can define a second order recurrence:
$$q_{n+2}=q_{n+1}(q_{n+1}q_n+1)+\frac{q_{n+1}^3}{q_n^2} \left(\frac{q_{n+1}}{q_n}-1 \right)$$
With the following property: $$L(q_0,q_1)-S(q_0,q_1)=\lim_{n \to \infty} \frac{q_{n+1}}{q_n^3}- \sum_{n=0}^\infty \frac{q_n}{q_{n+1}}=\frac{q_1-q_0}{q_0^3}$$
Can we find a closed form for the recurrence? Or separately for the limit $L$ or the sum $S$ above?
Note that for the limit $L$ to be finite we need to have as $ n \to \infty$:
$$q_n \asymp C \cdot a^{3^n}$$
For example we have:
$$S(1,2)=0.645953147800624278311945190231458547= \\ = \frac{1}{2}+\frac{1}{7}+\frac{1}{323}+\frac{1}{33657247}+\frac{1}{38127274806076464952763}+\dots$$
No closed form for this number either, however look at the denominator sequence - all the numbers end with $3$ or $7$. This pattern continues as far as I can see.
REPLY [2 votes]: As an answer for the question in the title I propose the following (using the results from the OP):
$$A=\sum_{n=0}^\infty \frac{q_n}{q_{n+1}} \tag{1}$$
We have:
$$\frac{q_{n+2}}{q_{n+1}}=q_{n+1}q_n+1+\frac{q_{n+1}^2}{q_n^2} \left(\frac{q_{n+1}}{q_n}-1 \right) \tag{2}$$
Set $a_n=\frac{q_n}{q_{n-1}}$ and $b_n=q_nq_{n-1}+1$, then we have:
$$a_n=q_{n-1}q_{n-2}+1+a_{n-1}(a_{n-1}-1)=a_{n-1}(a_{n-1}-1)+b_{n-1}$$
Thus, according to this paper: The Approximation of Numbers as Sums of Reciprocals, the sum in $(1)$ is the greedy expansion of the number $A$:
$$A=\sum_{n=1}^\infty \frac{1}{a_n}$$
According to the paper, every such expansion for a real number has the form:
$$x=\frac{1}{a_1}+\frac{1}{a_2}+\dots$$
$$a_{k+1}=a_k(a_k-1)+b_k,~~~a_1 \geq 2,~~b_k > 1,~~~~a_k,b_k \in \mathbb{N}$$
All of the requirements are met. (To prove that $a_n$ are all integers we only need to look at the initial definition of $q_n$).
And since the greedy expansion of a rational number is finite, while the sequence $a_n$ is infinite, we have proved that $A$ is irrational.
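For concreteness, here is a hypothetical helper (the name greedy is mine, not from the paper) that computes the greedy unit-fraction expansion of an exact rational and recovers the entries above:
greedy[x_, n_] := Module[{y = x, a, out = {}},
  Do[a = Ceiling[1/y]; AppendTo[out, a]; y -= 1/a, {n}]; out]
greedy[1/2 + 1/7 + 1/323, 3] (* {2, 7, 323} *)<|endoftext|>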
TITLE: Is this formula for $\frac{e^2-3}{e^2+1}$ known? How to prove it?
QUESTION [29 upvotes]: I found an interesting infinite sequence recently in the form of a 'two storey continued fraction' with natural number entries:
$$\frac{e^2-3}{e^2+1}=\cfrac{2-\cfrac{3-\cfrac{4-\cdots}{4+\cdots}}{3+\cfrac{4-\cdots}{4+\cdots}}}{2+\cfrac{3-\cfrac{4-\cdots}{4+\cdots}}{3+\cfrac{4-\cdots}{4+\cdots}}}$$
The numerical computation was done 'backwards', starting from some $x_n=1$ we compute:
$$x_{n-1}=\frac{a_n-x_n}{a_n+x_n}$$
And so on, until we get to $x_0$. The sequence converges for $n \to \infty$ if $a_n>1$ (or so it seems).
For constant $a_n$ we seem to have quadratic irrationals, for example:
$$\frac{\sqrt{17}-3}{2}=\cfrac{2-\cfrac{2-\cfrac{2-\cdots}{2+\cdots}}{2+\cfrac{2-\cdots}{2+\cdots}}}{2+\cfrac{2-\cfrac{2-\cdots}{2+\cdots}}{2+\cfrac{2-\cdots}{2+\cdots}}}$$
For $a_n=2^n$ we seem to have:
$$\frac{1}{2}=\cfrac{2-\cfrac{4-\cfrac{8-\cdots}{8+\cdots}}{4+\cfrac{8-\cdots}{8+\cdots}}}{2+\cfrac{4-\cfrac{8-\cdots}{8+\cdots}}{4+\cfrac{8-\cdots}{8+\cdots}}}$$
I found no other closed forms so far, and I don't know how to prove the formulas above. How can we prove them? What is known about such continued fractions?
There is another curious thing. If we try to expand some number in this kind of fraction, we can do it the following way:
$$x_0=x$$
$$a_0=\left[\frac{1}{x_0} \right]$$
$$x_1=\frac{1-a_0x_0}{1+a_0x_0}$$
$$a_1=\left[\frac{1}{x_1} \right]$$
However, this kind of expansion will not give us the above sequences. We will get faster growing entries. Moreover, the fraction will be finite for any rational number. For example, in the list notation:
$$\frac{3}{29}=[9,28]$$
You can easily check this expansion for any rational number.
As for the constant above we get:
$$\frac{e^2-3}{e^2+1}=[1,3,31,74,315,750,14286,\dots]$$
Not the same as $[1,2,3,4,5,6,7,\dots]$ above!
We have similar sequences growing exponentially for any irrational number I checked.
$$e-2=[1,6,121,284,1260,3404,25678,\dots]$$
$$\pi-3=[7,224,471,2195,10493,46032,119223,\dots]$$
By the way, if we try CF convergents, we get almost the same expansion, but finite:
$$\frac{355}{113}-3=[7,225]$$
$$\frac{4272943}{1360120}-3=[7,224,471,2195,18596,227459,\dots]$$
So, the convergents of this sequence are not the same as for the simple continued fraction, but similar.
Comparing the expansion by the method above and the closed forms at the top of the post, we can see that, unlike for simple continued fractions, this expansion is not unique. Can we explain why?
Here is the Mathematica code to compute the limit of the first fraction:
Nm = 50; (* truncation depth of the fraction *)
Cf = Table[j, {j, 1, Nm}]; (* entries a_n = n *)
b0 = (Cf[[Nm]] - 1)/(Cf[[Nm]] + 1); (* seed: x_Nm = 1, so x_{Nm-1} = (a_Nm - 1)/(a_Nm + 1) *)
Do[b1 = N[(Cf[[Nm - j]] - b0)/(Cf[[Nm - j]] + b0), 7500]; (* backward step x_{n-1} = (a_n - x_n)/(a_n + x_n) *)
  b0 = b1, {j, 1, Nm - 2}]
N[b0/Cf[[1]], 50]
And here is the code to obtain the expansion in the usual way:
x = (E^2 - 3)/(E^2 + 1); (* number to expand *)
x0 = x;
Nm = 27; (* number of entries to extract *)
Cf = Table[1, {j, 1, Nm}];
Do[If[x0 != 0, a = Floor[1/x0]; (* next entry a_j = [1/x_j] *)
  x1 = N[(1 - x0 a)/(x0 a + 1), 19500]; (* forward step x_{j+1} = (1 - a_j x_j)/(1 + a_j x_j) *)
  Print[j, " ", a, " ", N[x1, 16]];
  Cf[[j]] = a;
  x0 = x1], {j, 1, Nm}]
b0 = (1 - 1/Cf[[Nm]])/(1 + 1/Cf[[Nm]]); (* now rebuild the fraction backward from the entries *)
Do[b1 = N[(1 - b0/Cf[[Nm - j]])/(1 + b0/Cf[[Nm - j]]), 7500];
  b0 = b1, {j, 1, Nm - 2}]
N[x - b0/Cf[[1]], 20] (* residual: how well the rebuilt fraction reproduces x *)
Update
I have derived the forward recurrence relations for numerator and denominator:
$$p_{n+1}=(a_n-1)p_n+2a_{n-1}p_{n-1}$$ $$q_{n+1}=(a_n-1)q_n+2a_{n-1}q_{n-1}$$
They have the same form as for generalized continued fractions (a special case). Now I understand why the expansions are not unique.
REPLY [2 votes]: For the first one, you could write \begin{equation}
f(n) = \frac{n-f(n+1)}{n+f(n+1)}
\end{equation}
Then you suggest \begin{equation}
f(2) = \frac{e^2-3}{e^2+1}
\end{equation}
But this then gives \begin{align}
f(1) = \frac{2}{e^2-1}\\
f(3) = \frac{4}{e^2-1} \\
f(4) = \frac{3e^2-15}{e^2+3}\\
f(5) = \frac{-2e^2+18}{e^2-3}
\end{align}
I don't know if there is a recurrence relation that solves this, but you have a few more closed forms...
For the second one we have \begin{equation}
g(2) = \frac{2 - g(2)}{2+g(2)}
\end{equation}
so we can solve the quadratic $x^2+3x-2=0$ to get the positive root $(\sqrt{17}-3)/2$.
For the third one, we have \begin{equation}
h(n) = \frac{n-h(2n)}{n+h(2n)}
\end{equation}
using the trial value $h(2)=1/2$, we get \begin{align}
h(4)=\frac{2}{3}\\
h(8)=\frac{4}{5}\\
h(16)=\frac{8}{9}
\end{align}
then it is likely that \begin{equation}
h(n)=\frac{n}{n+2}
\end{equation}
as this satisfies the recurrence and gives $h(2)=1/2$.
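A one-line check that this closed form is consistent:
$$\frac{n-h(2n)}{n+h(2n)}=\frac{n-\frac{2n}{2n+2}}{n+\frac{2n}{2n+2}}=\frac{n(2n+2)-2n}{n(2n+2)+2n}=\frac{2n^2}{2n^2+4n}=\frac{n}{n+2}=h(n).$$<|endoftext|>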
TITLE: Is there a name for the center of a line?
QUESTION [14 upvotes]: Is there a name for the center point for a line?
For example:
---------o---------
If the dashes represent a straight line and the O represents the center of that line, what would the name for that center point be?
REPLY [31 votes]: A line goes forever in both directions, so it has no center. If you have a line segment - a part of a line with two definite ends - then the name is "midpoint."<|endoftext|>
TITLE: Set of quadratic expressions $nx^2+m$ whose union is all integers?
QUESTION [9 upvotes]: Is there a set of quadratic expressions whose ranges together cover the counting integers $(1,2,3,4,\ldots)$ while all pairwise intersections are empty? An example of linear expressions which satisfies this is simply:
$$
\begin{align*}
S_1 &= \{1,3,5,7,\ldots,2n+1\}\\
S_2 &= \{2,4,6,8,\ldots,2n+0\}\\[0.2in]
S_1 \cup S_2 &= \{1,2,3,4,5,6,7,8,9,\ldots\}\\
S_1 \cap S_2 &= \varnothing
\end{align*}$$
It is easy to construct these for linear functions, but is it possible to do the same for functions of the form $f_{mn}(x)=nx^2+m$?
REPLY [2 votes]: Proposition: Let $c\ne 0$ be any integer. Then there exists a multiplier $B\ge 1$ such that $(Bn)^2+c$ is never a square (inclusive of $0$) for any $n\ge 1$.
Proof: There are only finitely many ways to write $c$ as the product of two integers. Choose $B$ so that $2B$ exceeds the maximum difference between any complementary factors of $c$. Then $(m-Bn)(m+Bn) = c$ has no integer solutions with $n\ge 1$.
Lemma: Let $A$ be any finite set of integers, and $c \notin A$. There exists a $B\ge 1$ such that the set $\{(Bn)^2+c: n > 0\}$ is disjoint from the union of quadratic progressions $\{ n^2 + a : n \ge 0, a \in A \}$.
Proof: Apply the proposition to each value $c-a$, and take $B$ to be the LCM of all the (finitely many) multipliers so obtained.
Theorem: There exists an infinite sequence $(B_k, c_k) : k \ge 0$ with $B_k > 0$ such that the quadratic progressions $\{ B_k n^2 + c_k : n \ge 0 \}$ form a partition of $\mathbb N$.
Proof: Start with $(B_0,c_0) = (1,1)$. We proceed inductively: suppose that $(B_k, c_k)$ have been chosen for all $k < m$, and let $c_m$ be the smallest positive integer not covered by the progressions chosen so far. Applying the Lemma with $c = c_m$ gives a $B_m$ such that $\{ B_m n^2 + c_m : n > 0 \}$ is disjoint from all previous progressions, and also $\{ B_m n^2 + c_m : n = 0 \}$ is disjoint by choice of $c_m$. Thus we may construct an infinite sequence $(B_k,c_k)$ in this manner. Finally, this is certain to cover all of $\mathbb N$ since we chose $c_m$ minimally, so that the first $m$ progressions necessarily cover $\{1,\ldots,m\}$.<|endoftext|>
TITLE: Differential of a Map
QUESTION [5 upvotes]: I have the following map that embeds the Torus $T^2$ into $\mathbb{R}^3$:
$$f(\theta, \phi)=(\cos\theta\,(R+r\cos\phi),\ \sin\theta\,(R+r\cos\phi),\ r\sin\phi)$$
noting that $0<\theta,\phi<2\pi$.<|endoftext|>
TITLE: What is $\mathbb{Z}[x]/(x,x^2+1)$ isomorphic to?
QUESTION [6 upvotes]: Consider the quotient ring $\mathbb{Z}[x]/(x,x^2+1)$. Taking the quotient by $(x)$ first, we get a ring that is isomorphic to $\mathbb{Z}$ by setting the relation $x=0$. Applying the relation, $(x^2+1)$ becomes $(1)$, so the quotient ring is isomorphic to $\mathbb{Z}/(1)=\{0\}$.
Taking the quotient by $(x^2+1)$ first, we get a ring that is isomorphic to $\mathbb{Z}[i]$ by setting the relation $x^2=-1$ (or equivalently, $x=i$). Applying the relation, $(x)$ becomes $(i)$, so the quotient ring is isomorphic to $\mathbb{Z}[i]/(i)\approx\mathbb{Z}$.
Which approach, if either, is correct?
REPLY [5 votes]: You could also note that $1=(x^2+1)-x(x)\in (x,x^2+1)$, so $(x,x^2+1)=\mathbb{Z}[x]$. From this point of view, the quotient is evidently 0.<|endoftext|>
TITLE: Can endpoints be local minimum?
QUESTION [8 upvotes]: My textbook defines local maximum as follows:
A function $f$ has local maximum value at point $c$ within its
domain $D$ if $f(x)\leq f(c)$ for all $x$ in its domain lying in some
open interval containing $c$.
The question asks to find any local maximum or minimum values in the function
$$g(x)=x^2-4x+4$$ in the domain $1\leq x<+\infty$.
The answer at the back has the point $(1,1)$, which is the endpoint.
According to the definition given in the textbook, I would think endpoints cannot be local minimum or maximum given that they cannot be in an open interval containing themselves. (ex: the open interval $(1,3)$ does not contain $1$). Where am I wrong?
REPLY [3 votes]: I think fundamentally the comments are right, and you should speak with your teacher to confirm definitions and expectations. But there's also a point to make about topology here, which could justify the book's definition and answer as consistent.
The definition of local maximum you gave is:
A function $f$ has a local maximum at point $c$ within its domain $D$ if $f(x) \leq f(c)$ for all $x$ in its domain lying in some "open" interval containing $c$.
If you interpret this as saying that the interval can come from $\mathbb{R}$, and is not restricted to $D$, then you have no problem, as others have pointed out. But like you I am thinking about being restricted to $D$ and my instinct is to think only about intervals in $D$. This can still be ok, if we just alter our interpretation of "open" a little bit (in a natural way)...
Now, whenever we say "open" we're really saying "open with respect to [some particular topology]." A lot of the time it's obvious from context or the textbook has established a practice of contextual implication, but in this case (without knowing your book) I'd argue there are two reasonable interpretations:
We might be talking about open intervals with respect to the standard topology on $\mathbb{R}$ (which is what you've probably been using in your class), but
since we're restricting our attention to a domain $D \subset \mathbb{R}$, it's also pretty normal to talk about a different topology, called the subset topology on $D$ (induced by the standard topology on $\mathbb{R}$).
In the subset topology on $D \subset \mathbb{R}$ (induced by the standard topology), a set $S$ is open if and only if $S$ is the intersection $D \cap X $, with $X$ open in $\mathbb{R}$ with respect to the standard topology on $\mathbb{R}$.
We're often more interested in the subset topology than the usual topology on the whole space just because of situations like the one you're in, in which a definition doesn't work quite like you expect when $D \not= \mathbb{R}$.
So let's work with a slightly different definition of local maximum:
A function $f$ has a local maximum at point $c$ within its domain $D$ if $f(x) \leq f(c)$ for all $x$ in its domain lying in some interval $I$ containing $c$ such that $I$ is open with respect to the subset topology on $D$.
Now back to your case. Let $D = [1, \infty)$. For any $a > 1$, we have that
$$[1,a) = D \cap (-a,a)$$
Since $(-a,a)$ is open in $\mathbb{R}$ with respect to the standard topology, $[1,a)$ is open in $D$ with respect to the subset topology on $D$. This intuitively makes sense, because if you were an ant walking on $f(D)$, when you came to $f(1)$ you'd have nowhere to go but down.<|endoftext|>
TITLE: Where is the absolute value when computing antiderivatives?
QUESTION [23 upvotes]: Here is a typical second-semester single-variable calculus question:
$$ \int \frac{1}{\sqrt{1-x^2}} \, dx $$
Students are probably taught to just memorize the result of this since the derivative of $\arcsin(x)$ is taught as a rule to memorize. However, if we were to actually try and find an antiderivative, we might let
$$ x = \sin \theta \quad \implies \quad dx = \cos \theta \, d \theta $$
so the integral may be rewritten as
$$ \int \frac{\cos \theta}{\sqrt{1 - \sin^2 \theta}} \, d \theta = \int \frac{\cos \theta}{\sqrt{\cos^2 \theta}} \, d \theta $$
At this point, students then simplify the denominator to just $\cos \theta$, which boils the integral down to
$$ \int 1 \, d \theta = \theta + C = \arcsin x + C $$
which is the correct antiderivative. However, by definition, $\sqrt{x^2} = |x|$, implying that the integral above should really be simplified to
$$ \int \frac{\cos \theta}{|\cos \theta|} \, d \theta = \int \pm 1 \, d \theta $$
depending on the interval for $\theta$. At this point, it looks like the answer that we will eventually arrive at is different from what we know the correct answer to be.
Why is the first way correct even though we're not simplifying correctly, while the second way is... weird... while simplifying correctly?
REPLY [8 votes]: Let $\operatorname{sgn}(x)$ be the function that takes values $-1, 0, 1$ depending on the sign of $x$.
For the sake of generality, if you have two variables $x$ and $\theta$ related by $x = \sin \theta$ and the square root symbol means to always take the positive square root, then the opening post is correct: the right formula relating the differentials is
$$ \frac{\mathrm{d}x}{\sqrt{1 - x^2}} = \operatorname{sgn}(\cos(\theta)) \mathrm{d} \theta $$
Now, one thing to note is that the domain of these functions excludes $x = \pm 1$; similarly, it excludes all values of $\theta$ for which $\cos(\theta) = 0$.
On this domain, $\operatorname{sgn}(\cos(\theta))$ is locally constant. In this situation, the domain consists of a series of completely disjoint intervals $$\ldots \cup (-3\pi/2, -\pi/2) \cup (-\pi/2, \pi/2) \cup (\pi/2, 3\pi/2) \cup \ldots$$
"Locally constant" means any function that is constant on each of these intervals, but can have different values on different intervals.
Nearly everywhere in calculus where you learned something involving constants is actually about things that are locally constant.
For example, since $\operatorname{sgn}(\cos(\theta))$ is locally constant, its antiderivatives are all of the form
$$ \operatorname{sgn}(\cos(\theta)) \theta + C(\theta) $$
where $C(\theta)$ is also locally constant. (note that we need a local constant of integration, not merely a constant of integration!)
Now, if we were so inclined, we can extend this formula to the domain of all $\theta$ by lining up all of the constants. The end result is that the antiderivative is a constant plus the sawtooth function depicted below:
[Figure: the sawtooth antiderivative (image produced by Wolfram Alpha)]
As an example of seeing how this works, suppose our goal was to compute the integral
$$ \int_{-1}^1 \frac{\mathrm{d}x}{\sqrt{1 - x^2}} $$
While unusual, we can rewrite this as
$$ \int_{-\pi/2}^{5\pi/2} \operatorname{sgn}(\cos(\theta)) \mathrm{d} \theta $$
This isn't an invertible substitution, since each value of $x$ corresponds to three different values of $\theta$ (barring a few exceptions). But one-dimensional integration is very robust, and we should still expect to get the right answer if we have the details right.
And we do; if we take the sawtooth function above as the antiderivative, then the integral becomes
$$ \left( \frac{\pi}{2} \right) - \left( -\frac{\pi}{2} \right) = \pi $$
which is the correct answer — and the same answer we'd get by only integrating over $(-\pi/2, \pi/2)$.
Of course, if we aren't interested in the greater generality, we can just simplify by insisting that $\theta \in [-\pi/2, \pi/2]$ and simply take $\theta + C$ as the antiderivative, thus avoiding any hassles with the sign.
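A numeric cross-check of the two computations above (a sketch; the second call splits the range at the discontinuities of the sign):
NIntegrate[1/Sqrt[1 - x^2], {x, -1, 1}] (* ≈ 3.14159 *)
NIntegrate[Sign[Cos[t]], {t, -Pi/2, Pi/2, 3 Pi/2, 5 Pi/2}] (* ≈ 3.14159 *)<|endoftext|>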
TITLE: Is every element on a set also a set?
QUESTION [7 upvotes]: I've been trying to understand in a more formal way what a set actually is, but I have some questions. According to the axiom of regularity, for every non-empty set $A$ there exists an element in the set that's disjoint from $A$. That would mean that such an element is also a set, right?
I read here: Axiom of Regularity, that in axiomatic set theory everything is a set. I understand that natural numbers are constructed from the empty set, integers are constructed from the naturals, rationals from the integers, and reals from the rationals. I can see how every element in such sets is also a set. But, for example, in the set of all the letters of the alphabet, or the sample space of an experiment when the possible results are not numbers, or the set of my classmates, it's not clear to me how their elements are also sets. So, are they really sets? Is every element in a set also a set?
Thank you
REPLY [5 votes]: That depends on which axioms or system you're using. You could of course make up a system which allows (distinct) atomic objects (they're called ur-elements) that can be elements of sets; this is how ZF set theory originally started.
However, in (modern) ZF set theory the axiom of extensionality basically prohibits anything that's not a set: things are considered equal if they have the same elements, and since anything that is not a set does not have any elements, it would be considered equal to the empty set.
As you pointed out, one constructs the natural numbers and so on by constructing sets. However, one normally does not use those properties of them (being sets); you almost never see things like $1 \cup 2$ or $0 \in 1$. One should note that the way the natural numbers are constructed is not standardized; that is, there are different ways to achieve the same (standardized) properties of the numbers, which makes using numbers as if they were particular sets non-standard, with no universally accepted meaning.<|endoftext|>
TITLE: Show that $f_{\alpha}(t)$ is a p.d.f.
QUESTION [6 upvotes]: Let $\displaystyle \phi(t)=\frac{1}{\sqrt{2\pi}}e^{-t^2/2}$,$t\in \Bbb R$ be the standard normal density function and $\displaystyle \Phi(x)=\int_{-\infty}^x\phi(t)\,dt$ be the standard normal distribution function. Let $f_{\alpha}(t)=2\phi(t)\Phi(\alpha t)$,$t\in \Bbb R$
where $\alpha \in \Bbb R$. Show that $f_{\alpha}$ is a probability density function.
We have $\Phi'(x)=\phi(x)$. We have to show that $\displaystyle\int_{-\infty}^{\infty}f_{\alpha}(t)\,dt=1$. I tried integration by parts but I got the value $0$. Is there another approach, or where is my mistake?
Edit :
$\displaystyle \int_{-\infty}^{\infty} f_{\alpha}(t)dt= 2\int_{-\infty}^{\infty}\phi(t)\Phi(\alpha t)\,dt=2\left[\Phi(\alpha t)\int_{-\infty}^{\infty}\phi(t)\,dt\right]_{-\infty}^{\infty}-2\int_{-\infty}^{\infty}\left[\alpha\Phi'(\alpha t).\int_{-\infty}^{\infty}\phi(t)\,dt\right]\,dt=2[\Phi(\infty)-\Phi(-\infty)]-2\int_{-\infty}^{\infty}\alpha\phi(\alpha t)\,dt=\cdots=0$
REPLY [9 votes]: A probabilistic interpretation: consider $(X,Y)$ i.i.d. standard normal. Then $\Phi(\alpha t)=P(X<\alpha t)=P(X<\alpha Y\mid Y=t)$, so $$\int_{-\infty}^{\infty}2\phi(t)\Phi(\alpha t)\,dt=2\,P(X<\alpha Y)=1,$$ since $X-\alpha Y$ is a centered normal random variable and hence negative with probability $\frac12$.
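A numeric check of the normalization for a few values of $\alpha$ (a sketch):
Table[NIntegrate[2 PDF[NormalDistribution[], t] CDF[NormalDistribution[], a t],
  {t, -Infinity, Infinity}], {a, {-2, 1/2, 3}}] (* each ≈ 1 *)<|endoftext|>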
TITLE: Is every $F_{\sigma\delta}$-set a set of points of convergence of a sequence of continuous functions?
QUESTION [7 upvotes]: It is well known that if $\langle f_n:n\in\mathbb{N}\rangle$ is a sequence of continuous functions, $f_n\colon\mathbb{R}\to\mathbb{R}$, then $\big\{x\in\mathbb{R}:\lim_{n\to\infty}f_n(x)\text{ exists}\big\}$ is an $F_{\sigma\delta}$-set (see this post). I am asking if the converse is true, i.e., whether for every $F_{\sigma\delta}$-set $E\subseteq\mathbb{R}$ there exists a sequence $\langle f_n:n\in\mathbb{N}\rangle$ of continuous functions, $f_n\colon\mathbb{R}\to\mathbb{R}$, such that $\big\{x\in\mathbb{R}:\lim_{n\to\infty}f_n(x)\text{ exists}\big\}=E$.
My attempt: I would try to prove it in two steps.
(1) Given an $F_{\sigma\delta}$-set $E$, find closed sets $E^k_n$, $n,k\in\mathbb{N}$, such that $E^k_n\supseteq E^l_n$ and $E^k_n\subseteq E^k_m$ for $k\le l$ and $n\le m$, and $E=\bigcap_k\bigcup_n E^k_n$.
(2) Given $E^k_n$ as above, find continuous functions $f_n\colon\mathbb{R}\to\mathbb{R}$ such that for every $x$, $x\in E^k_N$ iff $\left|f_n(x)-f_m(x)\right|\le 2^{-k}$ for all $m\ge n\ge N$.
(1) would be accomplished as follows. Let $E=\bigcap_k\bigcup_n F^k_n$, $F^k_n$ closed. Let $\langle G^0_n:n\in\mathbb{N}\rangle$ consist of all elements of $\langle F^0_n:n\in\mathbb{N}\rangle$, each repeating infinitely many times. Let $\langle G^1_n:n\in\mathbb{N}\rangle$ consist of all possible intersections $F^0_i\cap F^1_j$, $i,j\in\mathbb{N}$, each repeating infinitely many times and ordered so that $G^1_n\subseteq G^0_n$ for every $n$. Similarly, let $\langle G^k_n:n\in\mathbb{N}\rangle$ consist of all possible intersections $G^0_{i_0}\cap\cdots\cap G^k_{i_k}$, $i_0,\dots,i_k\in\mathbb{N}$, each repeating infinitely many times and ordered so that $G^k_n\subseteq G^l_n$ for every $n$, whenever $l<k$.<|endoftext|>
TITLE: Is there any example of a sequentially-closed convex cone which is not closed?
QUESTION [5 upvotes]: I am interested in showing that a sequentially-closed convex cone is closed in order to prove a representation theorem for a pre-ordered preference relation. Thank you in advance!
REPLY [3 votes]: Consider the space $(\ell_\infty)^*$ endowed with the weak*-topology. The canonical image of $\ell_1$ in that space is sequentially closed by the Schur property of $\ell_1$, however it is also dense by Goldstine's theorem.<|endoftext|>
TITLE: Prove linear combinations of logarithms of primes over $\mathbb{Q}$ is independent
QUESTION [5 upvotes]: Suppose we have a set of primes $p_1,\dots,p_t$. Prove that $\log p_1,\dots,\log p_t$ are linearly independent over $\mathbb{Q}$. Now, this means $ \sum_{j=1}^{t}x_j\log(p_j)=0 \iff x_1=\dots=x_t=0$.
I think I have to use the fact that every positive $q\in\mathbb{Q}$ can be written as $\prod_{p\in\mathcal{P}}p^{n_p}$, where $(n_p)$ is a unique sequence $(n_2,n_3,\dots)$ of integers with all but finitely many terms zero. Here, $\mathcal{P}$ denotes the set of all primes.
Now how can I use this to prove the linear independence?
REPLY [5 votes]: If $\sum_{j=1}^{t}x_j\log(p_j)=0$
then
$\sum_{j=1}^{t}y_j\log(p_j)=0$ where $y_j \in \Bbb Z$ is the product of $x_j$ by the common denominator of the $x_j$'s.
Therefore $\log\left(\prod_{j=1}^t p_j^{y_j}\right) = 0$,
which implies $\prod_{j=1}^t p_j^{y_j} = 1$, and this is only possible if $y_j=0$ for all $j$. Indeed, you have
$$ \prod\limits_{\substack{1 \leq j \leq t\\ y_j \geq 0}} p_j^{y_j} =
\prod\limits_{\substack{1 \leq i \leq t\\ y_i < 0}} p_i^{-y_i}$$
and uniqueness of prime powers decomposition implies $y_j=0$ for all $j$.
The converse is easy to see: if $x_j=0$ for all $j$, then $\sum_{j=1}^{t}x_j\log(p_j)=0$.<|endoftext|>
TITLE: Find minimal value of $abc$ if the quadratic equation $ax^2-bx+c = 0$ has two roots in $(0,1)$
QUESTION [9 upvotes]: If $$ ax^2-bx+c = 0 $$ has two distinct real roots in $(0,1)$, where $a, b, c$ are natural numbers, then find the minimum value of the product $abc$.
REPLY [13 votes]: Since $a,b,c $ are positive the roots are trivially greater than 0.
What remains is to solve the inequality:
$\frac{b + \sqrt{b^2-4ac}}{2a} <1$
This reduces to $ a+c>b$
But the roots being real and distinct we have $b^2 >4ac$
Combining both we have :
$a^2 + c^2 + 2ac > b^2 > 4ac$
$b^2 > 4ac$ tells us $b> 2$ (why?)
$ a^2 + c^2+ 2ac > 4ac $ tells us $a \neq c$
Checking small cases we get $(a,b,c) =(5,5,1)$ where $abc =25$
EDIT:
Checking "small" cases is not informative, so adding an explanation:
Keeping in mind $a+c>b$, the minimum value of $ac$ occurs when $a=b$ and $c=1$. So for given $b$, the minimum of $abc$ is $b^2$. The smallest value of $b$ satisfying the inequality $b^2>4b$ is $5$ (as $ac=b$). Hence the corresponding minimum value is $5^2$.
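A brute-force confirmation in Mathematica (a sketch; note that "both roots in $(0,1)$" also forces the vertex $b/(2a)$ into $(0,1)$, i.e. $b<2a$, in addition to the two inequalities above):
inUnit[{a_, b_, c_}] := b^2 > 4 a c && a - b + c > 0 && b < 2 a; (* f(0) = c > 0 is automatic *)
First@MinimalBy[Select[Tuples[Range[12], 3], inUnit], Times @@ # &] (* {5, 5, 1}, with abc = 25 *)<|endoftext|>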
TITLE: Justify geometrically: an element and its inverse are not conjugate
QUESTION [6 upvotes]: Consider the group $G$ of rotations of the regular tetrahedron in $\mathbb{R}^3$. We know that this group is $A_4$. We also know that a rotation of order $3$ and its inverse are not conjugate: a rotation of order $3$ corresponds to a $3$-cycle $(123)$, and in $A_4$ we know by algebraic arguments that $(123)$ and $(132)$ are not conjugate.
Q. Is there any geometric smart way to show that a rotation of order $3$ and its inverse are not conjugate in the group of rotational symmetries?
REPLY [3 votes]: You can visualize this as follows: the tetrahedron is orientable, that is, you can draw on each face a circular arc such that where these orientations meet on the edges they annihilate each other. There are eight rotations of order $3$, two for each face: a positive one and a negative one. It is impossible for the action of the symmetry group to map a positive rotation into a negative one.<|endoftext|>
TITLE: Are closed ball convex in a translation surface?
QUESTION [8 upvotes]: Let $(X,\omega)$ be a translation surface and $x$ any point (smooth or not) in it. Let $r\in \mathbb{R}^+$ be such that it is smaller than the diameter of $(X,\omega)$. Is the closed ball $B_r(x)$ always convex?
My guess is no.
I tried to figure it out using a simple translation surface: the regular octagon with sides identified (it has one point of conical singularity, of total angle $6\pi$). Then, if I'm not wrong, I can find a smooth point $x$ and an $r>0$ such that the closed ball $B_r(x)$ "overlaps" around the singular point, giving non-convexity. In the figure below I've drawn the situation I mean: the ball $B_r(x)$ is the dark part of the octagon, and I drew two segments not entirely contained in it.
Are my guess and my construction right?
Thank you
REPLY [2 votes]: You are correct that the answer is no. Your example seems correct as well.
Perhaps the simplest example is an infinite circular cylinder of radius $r$ (and circumference $2\pi r$). If $p$ is any point on this cylinder, the ball of radius $\pi r$ centered at $p$ is "tangent" to itself on the back side, and is thus clearly not convex. The following picture shows this ball:
Indeed, balls on this cylinder are convex if and only if their radius is less than $\pi r/2$.
Of course, this example is non-compact, but basically the same geometry works on a flat torus of sufficient size.<|endoftext|>
TITLE: Compute $E(\sin X)$ if $X$ is normally distributed
QUESTION [7 upvotes]: If $X$ is normally distributed with mean $\mu$ and standard deviation $\sigma$, what is the expected value $E[\sin(X)]$?
I think this has something to do with the characteristic function...
REPLY [10 votes]: Let $X\sim \mathcal{N}(\mu,\sigma)$. Then, the characteristic function of $X$ is
$$t\mapsto\phi_{X}(t):=\Bbb E[\exp(itX)]=\exp\left(i\mu t-\frac{\sigma^{2}t^{2}}{2}\right)$$
By linearity of the integral, we have, for any integrable complex-valued function $f$:
$$\mathfrak{Im}\int f=\int \mathfrak{Im} f \tag{1}$$
where $\mathfrak{Im}$ denotes the imaginary part of a complex number and is defined pointwise for a complex-valued function. Indeed, let $(\Omega,\mathcal{F},\nu)$ be a measure space and $f:\Omega\to\Bbb C$ a $\nu$-integrable function. Then, for any $\omega\in\Omega$, we can write:
$$f(\omega)=f_{1}(\omega)+if_{2}(\omega)$$
where $f_{1}$ and $f_{2}$ are real-valued functions on $\Omega$. It is easy to see that $f_{1}$ and $f_{2}$ are integrable if $f$ is integrable (actually, if and only if). Therefore, we have:
$$\int_{\Omega} f\text{d}\nu=\int_{\Omega}f_{1}+if_{2}\text{d}\nu:=\int_{\Omega}f_{1}\text{d}\nu+i\int_{\Omega}f_{2}\text{d}\nu$$
$(1)$ follows obviously.
Hence, we have:
\begin{align*}
\Bbb E[\sin(X)]&=\,\Bbb E[\mathfrak{Im}\exp(iX)]\\
&=\mathfrak{Im}\,\Bbb E[\exp(iX)]\\
&=\mathfrak{Im}\,\phi_{X}(1)\\
&=\mathfrak{Im}\exp\left(i\mu-\frac{\sigma^{2}}{2}\right)\\
&=\sin(\mu)\exp\left(-\frac{\sigma^{2}}{2}\right)
\end{align*}<|endoftext|>
TITLE: Is there any mathematical reason for this "digit-repetition-show"?
QUESTION [136 upvotes]: The number $$\sqrt{308642}$$ has a crazy decimal representation : $$555.5555777777773333333511111102222222719999970133335210666544640008\cdots $$
Is there any mathematical reason for so many repetitions of the digits?
A long block containing only a single digit would be easier to understand. This could mean that there are extremely good rational approximations. But here we have many long one-digit blocks, some consecutive, some interrupted by a few digits. I did not calculate the probability of such a "digit-repetition-show", but I think it is extremely small.
Does anyone have an explanation ?
REPLY [152 votes]: The architect's answer, while explaining the absolutely crucial fact that $$\sqrt{308642}\approx 5000/9=555.555\ldots,$$ didn't quite make it clear why we get several runs of repeating decimals. I try to shed additional light on that using a different tool.
I want to emphasize the role of the binomial series. In particular the Taylor expansion
$$
\sqrt{1+x}=1+\frac x2-\frac{x^2}8+\frac{x^3}{16}-\frac{5x^4}{128}+\frac{7x^5}{256}-\frac{21x^6}{1024}+\cdots
$$
If we plug in $x=2/(5000)^2=8\cdot10^{-8}$, we get
$$
M:=\sqrt{1+8\cdot10^{-8}}=1+4\cdot10^{-8}-8\cdot10^{-16}+32\cdot10^{-24}-160\cdot10^{-32}+\cdots.
$$
Therefore
$$
\begin{aligned}
\sqrt{308642}&=\frac{5000}9M=\frac{5000}9+\frac{20000}9\cdot10^{-8}-\frac{40000}9\cdot10^{-16}+\frac{160000}9\cdot10^{-24}+\cdots\\
&=\frac{5}9\cdot10^3+\frac29\cdot10^{-4}-\frac49\cdot10^{-12}+\frac{16}9\cdot10^{-20}+\cdots.
\end{aligned}
$$
This explains the runs, their starting points, and the origin and location of those extra digits not part of any run. For example, the run of $7$s begins where the first two terms of the above series are both "active" (contributing digits $5$ and $2$, which add to $7$). When the third term joins in, we need to subtract a $4$, and a run of $3$s ensues, et cetera.
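One can watch the series reproduce the runs directly (a small check; the four terms above already agree with $\sqrt{308642}$ to roughly $28$ decimal places):
N[Sqrt[308642], 40]
N[5/9 10^3 + 2/9 10^-4 - 4/9 10^-12 + 16/9 10^-20, 40]<|endoftext|>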
TITLE: Integral of a bivariate Gaussian in the positive quadrant
QUESTION [5 upvotes]: I am looking for a reference (or a somewhat simple proof) for the following result, which for instance Mathematica spits out without too much effort. Here $a,b,c \in \mathbb{R}$ are constants satisfying $a, c < 0$ and $b^2 < 4 a c$.
$$\int_0^{\infty}\int_0^{\infty} \exp(a x^2 + b x y + c y^2) \, dx \, dy = \frac{1}{2\sqrt{4ac-b^2}} \left(\pi + 2 \arctan\left(\frac{b}{\sqrt{4ac-b^2}}\right)\right).$$
One idea I had was to write:
\begin{align}
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp(a x^2 + b x y + c y^2) \, dx \, dy &= 2 \int_0^{\infty}\int_0^{\infty} \exp(a x^2 + b x y + c y^2) \, dx \, dy \\
&+ 2 \int_0^{\infty}\int_0^{\infty} \exp(a x^2 - b x y + c y^2) \, dx \, dy
\end{align}
The integral on the left can easily be computed as it corresponds to the total probability of the corresponding bivariate Gaussian distribution (minus scaling factors), but then I'm stuck at solving the other integral with $(-b)$ substituted for $b$, which seems just as complicated to compute and is not easily relatable to the original integral. So I'm not sure if this approach leads anywhere.
A more direct approach would be to simply compute both integrals explicitly, one at a time. Doing one integral first, a human can verify without too much effort that
$$\int_0^{\infty} \exp(a x^2 + b x y + c y^2) \, dx = \sqrt{\frac{\pi}{-4 a}} \cdot \exp\left(\frac{(4 a c -b^2) y^2}{4a}\right) \left(\text{erf}\left(\frac{b y}{2 \sqrt{-a}}\right)+1\right).$$
Doing the second integral with $\exp(u y^2) (\text{erf}(v y) + 1)$ is not so straightforward though, and for instance I could not find a reference for computing such integrals (and getting an arctangent in the process) in Abramowitz and Stegun's handbook. So if someone has a reference for integrating $\exp(u y^2) \text{erf}(v y)$ over $y > 0$ for $u, v$ as above, that would also be appreciated.
REPLY [5 votes]: Of course, just when I gave up and posted the question, I found a solution... It is based on substituting $y = x s$ (and $dy = x \, ds$) before computing the integral over $x$, so that it greatly simplifies and an integral over $1/(a + bs + cs^2)$ remains, which leads to the arctangent solution.
\begin{align}
\int_{y=0}^{\infty}\int_{0}^{\infty} \exp(a x^2 + b x y + c y^2) \, dx \, dy &= \int_{s=0}^{\infty} \left(\int_{0}^{\infty} \exp\left((a + bs + cs^2)x^2 \right) \, x \, dx\right) \, ds \\
&= \int_{0}^{\infty} \left[\frac{\exp\left((a + bs + cs^2) x^2\right)}{2 (a + bs + cs^2)}\right]_{x=0}^{\infty} \, ds \\
&= \int_{0}^{\infty} \left[0 - \frac{1}{2 (a + bs + cs^2)}\right] \, ds \\
&= \frac{-1}{2} \int_{0}^{\infty} \frac{1}{a + bs + cs^2} \, ds \\
&= \frac{-1}{2} \left[\frac{2}{\sqrt{4 a c - b^2}} \, \arctan \left(\frac{2 c s + b}{\sqrt{4 a c - b^2}}\right)\right]_{s=0}^{\infty} \\
&= \frac{-1}{\sqrt{4 a c - b^2}} \left[-\frac{\pi}{2} - \arctan \left(\frac{b}{\sqrt{4 a c - b^2}}\right)\right].
\end{align}
Here we used the provided conditions on $a, b, c$ several times throughout. For instance, the third equality uses that $a + bs + cs^2 < 0$ for all $s > 0$, and the final evaluation uses $c < 0$, so that $2cs+b\to-\infty$ and the arctangent tends to $-\pi/2$ as $s\to\infty$.
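For what it's worth, a numeric check of the closed form at one admissible parameter choice (a sketch):
With[{a = -1, b = 1, c = -2}, (* a, c < 0 and b^2 < 4 a c *)
 {NIntegrate[Exp[a x^2 + b x y + c y^2], {x, 0, Infinity}, {y, 0, Infinity}],
  (Pi + 2 ArcTan[b/Sqrt[4 a c - b^2]])/(2 Sqrt[4 a c - b^2])}]<|endoftext|>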
TITLE: Discriminant of a cyclotomic field
QUESTION [7 upvotes]: If $\zeta$ is a primitive $n$-th root of unity, prove that:
$$d(1, \zeta,...,\zeta^{\varphi(n)-1})=(-1)^{\varphi(n)/2}n^{\varphi(n)}\prod_{p\mid n} p^{-\frac{\varphi(n)}{p-1}}$$
Let $n=\prod_{i=1}^{m}p_i^{e_i}$. After looking it up in some books, I was able to understand why this is true for $m=1$. However, they all ignored the general case $m>1$ or simply stated that it could be done by induction on $m$, but I really can't see how it could be done.
The only interesting thing I could find out was that for $n,m$ with $\gcd(n, m)=1$, we get, on the right hand side of the equation:
$$(-1)^{\varphi(nm)/2}(nm)^{\varphi(nm)}\prod_{p\mid nm} p^{-\frac{\varphi(nm)}{p-1}}=$$
$$\left((-1)^{\varphi(n)/2}n^{\varphi(n)}\prod_{p\mid n} p^{-\frac{\varphi(n)}{p-1}}\right)^{\varphi(m)}\left((-1)^{\varphi(m)/2}m^{\varphi(m)}\prod_{p\mid m} p^{-\frac{\varphi(m)}{p-1}}\right)^{\varphi(n)}$$
That makes me think I'm getting somewhere, but I'm stuck with the problem of showing that $d(1, \zeta,...,\zeta^{\varphi(nm)-1})=[d(1, \zeta,...,\zeta^{\varphi(n)-1})]^{\varphi(m)}[d(1, \zeta,...,\zeta^{\varphi(m)-1})]^{\varphi(n)}$, which doesn't seem trivial at all. Any ideas? Thanks!
REPLY [5 votes]: Because we have a proposition that says:
If $K, L$ are two number fields linearly disjoint over $\mathbb{Q}$,
$KL$ their compositum, and their discriminants are coprime, then
$$\delta_{KL}=\delta_{K}^{[L:\mathbb{Q}]}\cdot\delta_{L}^{[K:\mathbb{Q}]}$$
In our case, $\mathbb{Q}(\zeta_{n})$ and $\mathbb{Q}(\zeta_{m})$ are linearly disjoint because $\gcd(n,m)=1$, and their discriminants are coprime;
then
$$\delta_{\mathbb{Q}(\zeta_{mn})}=\delta_{\mathbb{Q}(\zeta_{n})}^{\phi(m)}\cdot\delta_{\mathbb{Q}(\zeta_{m})}^{\phi(n)}.$$
But the problem is $$\bigg( (-1)^{\phi(n)/2}n^{\phi(n)}\prod_{p\mid n}p^{\frac{-\phi(n)}{p-1}}\bigg)^{\phi(m)}\cdot\bigg( (-1)^{\phi(m)/2}m^{\phi(m)}\prod_{p\mid m}p^{\frac{-\phi(m)}{p-1}}\bigg)^{\phi(n)}=(-1)^{\phi(nm)}(nm)^{\phi(nm)}\prod_{p\mid nm}p^{\frac{-\phi(nm)}{p-1}}$$<|endoftext|>
TITLE: A smooth function can not be transformed into another smooth function without changing the value of every open interval.
QUESTION [5 upvotes]: Take any $C^\infty$ (smooth) function $f: R \to R$. For an arbitrary function $t:R\to R$, define $g :R\to R$ as $g(x)= (t\circ f)(x)$
Conjecture: For any such $g$, if $g$ is smooth ($g\in C^\infty$), the following must necessarily hold:
$(i)$: Either $t(x) = x$ (the identity function), or
$(ii)$: There exists no open ($O_R$) interval $U$ on the domain of $f, g$, for which holds: $f(U)=g(U)$. i.e.: $$\forall U\in O_R:\exists x\in U:f(x)\neq g(x)$$
In plain English: A smooth function cannot be transformed into another smooth function, without changing the values in all its intervals: Only isolated points may remain unchanged.
Here is an incomplete argument why it seems to me must be true:
Assume we have an arbitrary smooth function $f$, an arbitrary function $t$, and $g=t\circ f$. Assume that $t$ is not the identity function (contradicting condition $i$), and that for some interval $(a,b)$, $f(x)=g(x)$ for all $x\in (a,b)$ (contradicting condition $ii$). Take $b$ here to be the largest $b$ such that this holds (which is possible by the Completeness Axiom on $\mathbb R$).
Now denote by $f_n, g_n$ the $n$th derivative of $f, g$ respectively. Since by assumption, $f$ is smooth on $b$, we know that
$$(1): \underset{\delta \to 0^-}{\text{Lim}}\left(\frac{f_{n-1}(b+\delta)-f_{n-1}(b)}{\delta}\right)=:L_{f_n}^-=L_{f_n}^+:=\underset{\delta \to 0^+}{\text{Lim}}\left(\frac{f_{n-1}(b+\delta)-f_{n-1}(b)}{\delta}\right)$$
($L$ will denote the limit with respect to the point $b$).
$(2):$ Since $f$ and $g$ are identical on $(a,b)$, we also know that $L_{f_n}^-=L_{g_n}^-$, for all $n\in \mathbb N$.
$(3):$ Now assume (in order to derive a contradiction) that $g$ is smooth on $b$, so that $L_{g_n}^-=L_{g_n}^+$ for all $n \in \mathbb N$. Then using $(1,2)$ it also holds that $L_{f_n}^+=L_{g_n}^+$ for all $n \in \mathbb N$.
However, since $b$ is the largest value such that $f(x)=g(x)$ on $(a,b)$, that means that either $f(b)\neq g(b)$ (in which case $g$ is discontinuous and not smooth, completing the proof for that case), or for some $c>b$, it is the case that $f(x)\neq g(x)$ for all $x\in (b,c)$.
Now here comes a bit of a leap: Given that $f(x)\neq g(x)$ for all $x\in (b,c)$, we also know that there is an interval $(b,\beta _1)$, where $\beta_1\leq c$, in which for all $x$: $f_1(x)\neq g_1(x)$. Similarly, given interval $(b, \beta_i)$ in which for all $x: f_i(x)\neq g_i(x)$, there is an interval $(b, \beta_{i+1})$, where $\beta_{i+1}\leq \beta_i$, in which for all $x: f_{i+1}(x)\neq g_{i+1}(x)$
Again a leap:
Hence we know that for any $n\in \mathbb N$, there is a $\beta \in \mathbb N$, such that for all $x\in (b, \beta), f_{n}(x)\neq g_{n}(x)$. Hence there exists an $n\in \mathbb N$, such that $L_{g_n}^+ \neq L_{f_n}^+$. This contradicts $(3)$, therefore, $g$ is not smooth.
Discussion:
Is this conjecture correct?
Is the first part of the proof correct?
Is there a way to fill in the "leaps" at the end?
Are there better ways to prove it (or if the conjecture is false, to restate it into a correct one)?
ps. note, I have no formal maths training, and I came up with this conjecture myself based on intuition, so if this is a stupid conjecture or proof, understand that.
REPLY [3 votes]: Counterexample: Define
$$f(x) = \begin{cases} 0 & x\le 0\\e^{-1/x} & x>0\end{cases}$$
Then $f\in C^\infty(\mathbb R).$ With $t(x) = x^2,$ we have $f$ and $ t\circ f$ equal to $0$ on $(-\infty,0].$<|endoftext|>
TITLE: Half iteration of exponential function
QUESTION [8 upvotes]: I'm working on the half iteration of the exponential function. No one has any idea what fractional iterations could mean but I think intuitively it should be a function $f(x)$ such that $f(f(x))=e^x$.
Here's how I'm finding $f(x)$ when $x\approx 0$:
If $x\approx 0$, then, we have,
$$e^x\approx 1+x+\frac{x^2}{2} \tag{1}$$
Now, if we assume the required function $f(x)$ to be of the form $ax^2+bx+c$, then $$f(f(x))= a^3x^4+2a^2bx^3+(2a^2c+ab^2+ab)x^2+(2abc+b^2)x+ac^2+bc+c$$
But, since $x\approx 0$ therefore,
$$f(f(x))=e^x\approx ac^2+bc+c+(2abc+b^2)x+(2a^2c+ab^2+ab)x^2 \tag{2}$$
Comparing coefficients of like powers of $x$ in equation (1) and (2), we get,
$$ac^2+bc+c=1 \tag {3.1}$$
$$2abc+b^2=1 \tag {3.2}$$
$$2a^2c+ab^2+ab=\frac{1}{2} \tag {3.3}$$
The problem is solving these equations. I've tried substitution but they get reduced to a polynomial of very high degree which I don't know how to solve. Is there some way to solve these to get $a$, $b$, and $c$ and hence get the required half iteration function of $e^x$ as $ax^2+bx+c$? Please tell me how to solve these three equations.
REPLY [2 votes]: Looking at the equations $$ac^2+bc+c=1 \tag 1$$ $$2abc+b^2=1 \tag 2$$ $$2a^2c+ab^2+ab=\frac{1}{2} \tag 3$$ we can eliminate $b$ from $(1)$ $$b=\frac{1-c-a c^2}{c}$$ Replacing in $(2)$ and solving for $a$ leads to $$a=\frac{\sqrt{1-2 c}}{c^2}$$ Replacing in $(3)$ leads to $$-c \left(c^3+12 c+6 \sqrt{1-2 c}-14\right)+4 \sqrt{1-2 c}=4$$ After squaring steps, this reduces to $$c^7+24c^5-28c^4+152c^3-264c^2+160c-32=0$$ which has only one real root close to $c=\frac 12$.
Using Newton's method to find the zero of the septic equation in $c$ leads to $$a=0.261795456735753$$ $$b=0.878112905194437$$ $$c=0.497894079064888$$ as already given in Gottfried Helms's answer. These numbers can be rationalized as $$a=\frac{37409}{142894}\qquad b=\frac{77821}{88623}\qquad c=\frac{18323}{36801}$$
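Here is a numeric sketch confirming these values and the defining property near $0$ (the symbol names croot, aroot, broot are mine):
croot = c /. FindRoot[c^7 + 24 c^5 - 28 c^4 + 152 c^3 - 264 c^2 + 160 c - 32 == 0,
    {c, 1/2}, WorkingPrecision -> 30];
aroot = Sqrt[1 - 2 croot]/croot^2;
broot = (1 - croot - aroot croot^2)/croot;
f[x_] := aroot x^2 + broot x + croot;
{N[f[f[1/100]]], N[Exp[1/100]]} (* agree to several digits near 0 *)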
Edit
Back to the problem eighteen months later, we could get expressions for the solution using $[1,n]$ Padé approximants for the septic equation (built around $c=\frac 12$). For different values of $n$, this would give
$$a_1=\frac{146336 \sqrt{352121}}{331786225}\qquad b_1=\frac{18369-4 \sqrt{352121}}{18215}\qquad c_1=\frac{18215}{36584}$$
$$a_2=\frac{3257213 \sqrt{44685705147}}{2630063332009}\qquad b_2=\frac{1635466-\sqrt{44685705147}}{1621747}\qquad c_2=\frac{1621747}{3257213}$$<|endoftext|>
TITLE: Denoting all the cube roots of a real number
QUESTION [9 upvotes]: This may be a very simple question to ask, but I am confused with these definitions and would like to clarify here.
$\sqrt{81} = 9$. But $\sqrt{81} \ne -9$ because $\sqrt{}$ is used to represent the principal root.
So, If I want to represent both the roots, I have to mention it as $\pm\sqrt{81} = \pm 9$.
We know every real number ($\ne 0$) has three cube roots, one real and two complex. So, if we say $\root 3 \of {27}$, it means the principal cube root, which is $3$.
If so,
(a) How do we indicate that we are referring to all the three cube roots
together (like $\pm \sqrt{81}$ for square roots)
because $\root 3 \of {}$ refers to only principal cube root?
(b) Why are there no commonly accepted guidelines to decide the principal
cube root? In some places it is taken to be the real root, while some
books take the one with positive imaginary part.
Please help me to clear my doubts.
REPLY [2 votes]: When using the cubic formula (like the quadratic formula, except for cubics), the first root is found using the principal cube root, and in this case it is the one with the largest real part; if there is a tie, you choose the one among them with the largest imaginary part. The second root is the one with the largest imaginary part that is not the principal cube root. The third is the other. If you do not follow this convention, you will not get the right answer in the cubic formula.<|endoftext|>
TITLE: Why does the gradient commute with taking expectation?
QUESTION [6 upvotes]: Let $X, Y$ be two random variables, with $X$ taking values in $\Bbb R^n$ and $Y$ taking values in $\Bbb R$.
Then we can look at the function $h: \Bbb R^n \to \Bbb R$ given by $$\beta \mapsto \Bbb E[(Y-X^T\beta)^2]$$ It is claimed that the gradient of $h$ is given by $$\nabla h = \Bbb E[2X(X^T\beta-Y)]$$
This seems like a special case of the identity
$$\nabla \Bbb E[f]=\Bbb E [\nabla f]$$
Where the expectation is taken over the mutual distribution of some random variables.
Formally, We want the following: Suppose $X_1,...,X_m$ are random variables returning values in some sets $A_i$ with some given mutual probability distribution. Then for every function $f: \Bbb R^n \times \prod A_i \to \Bbb R$, for every $\beta \in \Bbb R^n$ we can form the random variable $f(\beta, X_1,...,X_m)$ and take its expectation. Taking different values of $\beta$ gives rise to a function $\Bbb R^n \to \Bbb R$. We claim that its gradient is equal to the vector obtained by first fixing the values of $X_1,...,X_m$ and taking the gradient of the resulting function $\Bbb R^n \to \Bbb R$, and this gives a random variable returning values in $\Bbb R^n$, for which we can take the expectation.
REPLY [2 votes]: $\beta$ is not a random variable, so you can expand the expression and take $\beta$ out as a factor. Then differentiate that expression with respect to $\beta$ and check that the claimed equality holds. It is not more complicated than that. It would be a problem if you couldn't factor $\beta$ out (e.g. $e^{X^{\top}\beta}$); then you would indeed have to justify interchanging integration and differentiation.
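Concretely, writing out the expansion the answer suggests, for the quadratic case in the question:
$$h(\beta)=\Bbb E[Y^2]-2\beta^T\,\Bbb E[XY]+\beta^T\,\Bbb E[XX^T]\,\beta, \qquad \nabla h(\beta)=-2\,\Bbb E[XY]+2\,\Bbb E[XX^T]\,\beta=\Bbb E[2X(X^T\beta-Y)].$$<|endoftext|>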
TITLE: Two interesting results in integration $\int_{0}^{a}f(a-x) \ \mathrm{d}x= \int_{0}^{a}f(x)\ \mathrm{d}x$ and differentiation of powers of functions
QUESTION [6 upvotes]: I am investigating the following result in integration
$\displaystyle\int_{0}^{a}f(a-x) \ \mathrm{d}x = \int_{0}^{a}f(x) \ \mathrm{d}x \ \ \ (*)$
This neat little result forms the basis for many questions in calculus exams, often then asking one to evaluate something like
$\displaystyle\int_{0}^{\frac{\pi}{2}}\frac{\sin^n x}{\sin^n x + \cos^n x} \ \mathrm{d}x$ where $n$ is a positive integer. The process of solving this integral isn't too challenging, and is almost immediate from $(*)$.
My question is this: can anyone think of any more challenging integrals out there (possibly requiring some clever substitution, integration by parts etc.) that $(*)$ can help solve?
UPDATE
I also came across another identity involving differentiation:
$\displaystyle \frac{\mathrm{d}}{\mathrm{d}x}(u(x))^{v(x)} = (u(x))^{v(x)}\left(\frac{\mathrm{d}v(x)}{\mathrm{d}x}\ln u(x) + \frac{v(x)}{u(x)}\frac{\mathrm{d}u(x)}{\mathrm{d}x}\right)$.
This is another identity that can be used to solve integrals, but I am again unable to find any creative examples, so if anyone could suggest some I'd be happy to give them a go.
REPLY [2 votes]: There are a lot of possible answers. For example, $$\int_{0}^{1} \frac{x^3}{3x^2-3x+1} \mathrm{d} x=\int_{0}^{1} \frac{x^3}{x^3+(1-x)^3} \mathrm{d} x=\frac{1}{2}$$ or $$\int_{0}^{1}\frac{x^5}{5x^4-10x^3+10x^2-5x+1}\mathrm{d}x=\frac{1}{2}$$ are both good examples of how this property can be used. We can use this property to calculate these complicated looking integrals in less than a few seconds.
If we were not to use this property, we would have to use things like $$\int \frac{x^5}{5x^4-10x^3+10x^2-5x+1}\mathrm{d} x$$
which ends up being considerably more complicated.
In general (the denominator below being the expansion of $x^{2n+1}+(1-x)^{2n+1}$), we have the property
$$\int_{0}^{1} \frac{x^{2n+1}}{\sum_{k=1}^{2n+1}\binom{2n+1}{k} (-x)^{2n+1-k}}\mathrm{d}x=\frac{1}{2}$$
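A quick numeric check of this general identity for the first few $n$ (a sketch):
Table[NIntegrate[x^(2 n + 1)/Sum[Binomial[2 n + 1, k] (-x)^(2 n + 1 - k), {k, 1, 2 n + 1}],
  {x, 0, 1}], {n, 1, 4}] (* each value ≈ 0.5 *)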