Tasks: Text Generation
Modalities: Text
Sub-tasks: language-modeling
Languages: English
Size: 100K - 1M
License:
zhangir-azerbayev committed on
Commit • c0ecdaa
1 Parent(s): 4365a98
fetch wiki doesnt work, need to do this right
This view is limited to 50 files because it contains too many changes.
- fetch_wiki.py +48 -0
- wiki/wikipedia/0.txt +398 -0
- wiki/wikipedia/1.txt +37 -0
- wiki/wikipedia/10.txt +15 -0
- wiki/wikipedia/100.txt +43 -0
- wiki/wikipedia/1000.txt +450 -0
- wiki/wikipedia/1001.txt +15 -0
- wiki/wikipedia/1002.txt +22 -0
- wiki/wikipedia/1003.txt +5 -0
- wiki/wikipedia/1004.txt +5 -0
- wiki/wikipedia/1005.txt +11 -0
- wiki/wikipedia/1006.txt +94 -0
- wiki/wikipedia/1007.txt +47 -0
- wiki/wikipedia/1008.txt +5 -0
- wiki/wikipedia/1009.txt +37 -0
- wiki/wikipedia/101.txt +7 -0
- wiki/wikipedia/1010.txt +107 -0
- wiki/wikipedia/1011.txt +39 -0
- wiki/wikipedia/1012.txt +1 -0
- wiki/wikipedia/1013.txt +23 -0
- wiki/wikipedia/1014.txt +101 -0
- wiki/wikipedia/1015.txt +259 -0
- wiki/wikipedia/1016.txt +23 -0
- wiki/wikipedia/1017.txt +15 -0
- wiki/wikipedia/1018.txt +11 -0
- wiki/wikipedia/1019.txt +33 -0
- wiki/wikipedia/102.txt +35 -0
- wiki/wikipedia/1020.txt +15 -0
- wiki/wikipedia/1021.txt +172 -0
- wiki/wikipedia/1022.txt +261 -0
- wiki/wikipedia/1023.txt +42 -0
- wiki/wikipedia/1024.txt +71 -0
- wiki/wikipedia/1025.txt +69 -0
- wiki/wikipedia/1026.txt +43 -0
- wiki/wikipedia/1027.txt +57 -0
- wiki/wikipedia/1028.txt +99 -0
- wiki/wikipedia/1029.txt +15 -0
- wiki/wikipedia/103.txt +29 -0
- wiki/wikipedia/1030.txt +11 -0
- wiki/wikipedia/1031.txt +7 -0
- wiki/wikipedia/1032.txt +3 -0
- wiki/wikipedia/1033.txt +59 -0
- wiki/wikipedia/1034.txt +5 -0
- wiki/wikipedia/1035.txt +9 -0
- wiki/wikipedia/1036.txt +166 -0
- wiki/wikipedia/1037.txt +305 -0
- wiki/wikipedia/1038.txt +31 -0
- wiki/wikipedia/1039.txt +63 -0
- wiki/wikipedia/104.txt +9 -0
- wiki/wikipedia/1040.txt +111 -0
fetch_wiki.py
ADDED
@@ -0,0 +1,48 @@
from bs4 import BeautifulSoup as bs
import wikipediaapi
import sys
import re
import pypandoc

def page_titles_of_category(cat_page):
    """
    Recursively collect the titles of all main-namespace pages under
    cat_page, descending into its subcategories.
    """
    titles = []
    for member in cat_page.categorymembers.values():
        if member.ns == wikipediaapi.Namespace.MAIN:
            titles.append(member.title)
        elif member.ns == wikipediaapi.Namespace.CATEGORY:
            titles += page_titles_of_category(member)
    return titles

def wikipedia():
    wiki = wikipediaapi.Wikipedia('en')
    wiki_html = wikipediaapi.Wikipedia(language='en',
                                       extract_format=wikipediaapi.ExtractFormat.HTML)

    # The category crawl is disabled for now (see the commit message);
    # the block below is kept as an inert string literal.
    """
    init_categories = [
        #"Category:Mathematical_theorems",
        "Category:Mathematical_proofs",
        #"Category:Mathematical_examples",
        #"Category:Mathematical_problems",
        #"Category:Mathematical_terminology",
    ]

    title_set = set()
    for cat_name in init_categories:
        print(cat_name + "...")
        title_set = title_set.union(page_titles_of_category(wiki.page(cat_name)))

    for title in title_set:
    """

    # Smoke test: convert a single page's HTML extract to LaTeX.
    p_html = wiki_html.page('Division by zero').text

    pd_obj = pypandoc.convert_text(p_html, "latex", format="html")
    print(pd_obj)
    sys.exit()

if __name__ == "__main__":
    wikipedia()
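The numbered files under wiki/wikipedia/ added below are per-page text dumps. As a minimal sketch (not the committed code; the helper name, output layout, and sorting are assumptions), the disabled crawl could feed a per-page dump like this once fixed:

import os

import pypandoc
import wikipediaapi

def dump_pages(titles, out_dir="wiki/wikipedia"):
    # Hypothetical helper: fetch each page as HTML and write a
    # pandoc-converted LaTeX version to <out_dir>/<index>.txt,
    # mirroring the numbered files added in this commit.
    wiki_html = wikipediaapi.Wikipedia(language='en',
                                       extract_format=wikipediaapi.ExtractFormat.HTML)
    os.makedirs(out_dir, exist_ok=True)
    for i, title in enumerate(sorted(titles)):
        html = wiki_html.page(title).text
        latex = pypandoc.convert_text(html, "latex", format="html")
        with open(os.path.join(out_dir, f"{i}.txt"), "w") as f:
            f.write(latex)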
wiki/wikipedia/0.txt
ADDED
@@ -0,0 +1,398 @@
The Basel problem is a problem in mathematical analysis with relevance to number theory, first posed by Pietro Mengoli in 1650 and solved by Leonhard Euler in 1734, and read on 5 December 1735 in The Saint Petersburg Academy of Sciences. Since the problem had withstood the attacks of the leading mathematicians of the day, Euler's solution brought him immediate fame when he was twenty-eight. Euler generalised the problem considerably, and his ideas were taken up years later by Bernhard Riemann in his seminal 1859 paper "On the Number of Primes Less Than a Given Magnitude", in which he defined his zeta function and proved its basic properties. The problem is named after Basel, hometown of Euler as well as of the Bernoulli family, who unsuccessfully attacked the problem.

The Basel problem asks for the precise summation of the reciprocals of the squares of the natural numbers, i.e. the precise sum of the infinite series:
$$
\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \cdots.
$$

The sum of the series is approximately equal to 1.644934. The Basel problem asks for the exact sum of this series (in closed form), as well as a proof that this sum is correct. Euler found the exact sum to be π<sup>2</sup>/6 and announced this discovery in 1735. His arguments were based on manipulations that were not justified at the time, although he was later proven correct. He produced a truly rigorous proof in 1741.

The solution to this problem can be used to estimate the probability that two large random numbers are relatively prime. Two random integers in the range from 1 to n, in the limit as n goes to infinity, are relatively prime with a probability that approaches 6/π<sup>2</sup>, the inverse of the solution to the Basel problem.
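This 6/π² density is easy to check empirically. A minimal Python sketch (illustrative only, not part of the dataset's tooling):

import math
import random

# Estimate the probability that two random integers in [1, N] are coprime
# and compare it against 6/pi^2, the reciprocal of the Basel sum.
N, trials = 10**6, 200_000
hits = sum(math.gcd(random.randint(1, N), random.randint(1, N)) == 1
           for _ in range(trials))
print(hits / trials)      # empirical estimate
print(6 / math.pi**2)     # ~0.607927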
Euler's original derivation of the value π<sup>2</sup>/6 essentially extended observations about finite polynomials and assumed that these same properties hold true for infinite series.

Of course, Euler's original reasoning requires justification (100 years later, Karl Weierstrass proved that Euler's representation of the sine function as an infinite product is valid, by the Weierstrass factorization theorem), but even without justification, by simply obtaining the correct value, he was able to verify it numerically against partial sums of the series. The agreement he observed gave him sufficient confidence to announce his result to the mathematical community.

To follow Euler's argument, recall the Taylor series expansion of the sine function
$$
\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots
$$

Dividing through by x, we have
$$
\frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \cdots
$$

Using the Weierstrass factorization theorem, it can also be shown that the right-hand side is the product of linear factors given by its roots, just as we do for finite polynomials (which Euler assumed as a heuristic for expanding an infinite-degree polynomial in terms of its roots, though this is in fact not always true for a general $P(x)$):

<math>\begin{align}
\frac{\sin x}{x} &= \left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right)\left(1 - \frac{x}{3\pi}\right)\left(1 + \frac{x}{3\pi}\right) \cdots \\
&= \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right) \cdots
\end{align}</math>

If we formally multiply out this product and collect all the x<sup>2</sup> terms (we are allowed to do so because of Newton's identities), we see by induction that the x<sup>2</sup> coefficient of sin x/x is
$$
-\left(\frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \cdots \right) = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}.
$$

But from the original infinite series expansion of sin x/x, the coefficient of x<sup>2</sup> is −1/3! = −1/6. These two coefficients must be equal; thus,
$$
-\frac{1}{6} = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}.
$$

Multiplying both sides of this equation by −π<sup>2</sup> gives the sum of the reciprocals of the positive square integers:
$$
\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}.
$$
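Euler's numerical sanity check, comparing the closed form against partial sums, is a few lines of Python (illustrative only):

import math

# Partial sums of sum 1/n^2 versus the closed form pi^2/6; the tail
# beyond N terms is about 1/N, so a million terms agree to ~1e-6.
target = math.pi**2 / 6
s = 0.0
for n in range(1, 10**6 + 1):
    s += 1.0 / (n * n)
print(s, target, target - s)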
This method of calculating $\zeta(2)$ is detailed in expository fashion most notably in Havil's Gamma book, which details many zeta-function- and logarithm-related series and integrals, as well as a historical perspective related to the Euler gamma constant.

Using formulae obtained from elementary symmetric polynomials, this same approach can be used to enumerate formulae for the even-indexed zeta constants, which have the following known formula expanded by the Bernoulli numbers:
$$
\zeta(2n) = \frac{(-1)^{n-1} (2\pi)^{2n}}{2 \cdot (2n)!} B_{2n}.
$$

For example, let the partial product for $\sin(x)$ expanded as above be defined by $\frac{S_n(x)}{x} := \prod\limits_{k=1}^n \left(1 - \frac{x^2}{k^2 \cdot \pi^2}\right)$. Then using known formulas for elementary symmetric polynomials (a.k.a., Newton's formulas expanded in terms of power sum identities), we can see (for example) that

<math>\begin{align}
\left[x^4\right] \frac{S_n(x)}{x} & = \frac{1}{2\pi^4}\left(\left(H_n^{(2)}\right)^2 - H_n^{(4)}\right) \qquad \xrightarrow{n \rightarrow \infty} \qquad \frac{1}{2}\left(\zeta(2)^2-\zeta(4)\right) \\
& \qquad \implies \zeta(4) = \frac{\pi^4}{90} = -2\pi^2 \cdot [x^4] \frac{\sin(x)}{x} +\frac{\pi^4}{36} \\
\left[x^6\right] \frac{S_n(x)}{x} & = -\frac{1}{6\pi^6}\left(\left(H_n^{(2)}\right)^3 - 2H_n^{(2)} H_n^{(4)} + 2H_n^{(6)}\right) \qquad \xrightarrow{n \rightarrow \infty} \qquad \frac{1}{6}\left(\zeta(2)^3-3\zeta(2)\zeta(4) + 2\zeta(6)\right) \\
& \qquad \implies \zeta(6) = \frac{\pi^6}{945} = -3 \cdot \pi^6 [x^6] \frac{\sin(x)}{x} - \frac{2}{3} \frac{\pi^2}{6} \frac{\pi^4}{90} + \frac{\pi^6}{216},
\end{align}</math>

and so on for subsequent coefficients of $[x^{2k}] \frac{S_n(x)}{x}$. There are other forms of Newton's identities expressing the (finite) power sums $H_n^{(2k)}$ in terms of the elementary symmetric polynomials, $e_i \equiv e_i\left(-\frac{\pi^2}{1^2}, -\frac{\pi^2}{2^2}, -\frac{\pi^2}{3^2}, -\frac{\pi^2}{4^2}, \cdots\right),$ but we can go a more direct route to expressing non-recursive formulas for $\zeta(2k)$ using the method of elementary symmetric polynomials. Namely, we have a recurrence relation between the elementary symmetric polynomials and the power sum polynomials given by
$$
(-1)^{k}k e_k(x_1,\ldots,x_n) = \sum_{j=1}^k (-1)^{k-j-1} p_j(x_1,\ldots,x_n)e_{k-j}(x_1,\ldots,x_n),
$$

which in our situation equates to the limiting recurrence relation (or generating function convolution, or product) expanded as
$$
\frac{\pi^{2k}}{2}\cdot \frac{(2k) \cdot (-1)^k}{(2k+1)!} = -[x^{2k}] \frac{\sin(\pi x)}{\pi x} \times \sum_{i \geq 1} \zeta(2i) x^i.
$$

Then by differentiation and rearrangement of the terms in the previous equation, we obtain that
$$
\zeta(2k) = [x^{2k}]\frac{1}{2}\left(1-\pi x\cot(\pi x)\right).
$$

By the above results, we can conclude that $\zeta(2k)$ is always a rational multiple of $\pi^{2k}$. In particular, since $\pi$ and integer powers of it are transcendental, we can conclude at this point that $\zeta(2k)$ is irrational, and more precisely, transcendental for all $k \geq 1$. By contrast, the properties of the odd-indexed zeta constants, including Apéry's constant $\zeta(3)$, are almost completely unknown.

The Riemann zeta function ζ(s) is one of the most significant functions in mathematics because of its relationship to the distribution of the prime numbers. The zeta function is defined for any complex number s with real part greater than 1 by the following formula:
$$
\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}.
$$

Taking s = 2, we see that ζ(2) is equal to the sum of the reciprocals of the squares of all positive integers:

<math>\zeta(2) = \sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots = \frac{\pi^2}{6} \approx 1.644934.</math>

Convergence can be proven by the integral test, or by the following inequality:

<math>\begin{align}
\sum_{n=1}^N \frac{1}{n^2} & < 1 + \sum_{n=2}^N \frac{1}{n(n-1)} \\
& = 1 + \sum_{n=2}^N \left( \frac{1}{n-1} - \frac{1}{n} \right) \\
& = 1 + 1 - \frac{1}{N} {\stackrel{N \to \infty}{\longrightarrow}} 2.
\end{align}</math>

This gives us the upper bound 2, and because the infinite sum contains no negative terms, it must converge to a value strictly between 0 and 2. It can be shown that ζ(s) has a simple expression in terms of the Bernoulli numbers whenever s is a positive even integer. With s = 2n:
$$
\zeta(2n) = \frac{(2\pi)^{2n}(-1)^{n+1}B_{2n}}{2\cdot(2n)!}.
$$
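This Bernoulli-number formula can be spot-checked numerically; a short sketch using the first few Bernoulli numbers (B_2 = 1/6, B_4 = -1/30, B_6 = 1/42), which are standard values:

import math

# Check zeta(2n) = (2 pi)^(2n) (-1)^(n+1) B_{2n} / (2 (2n)!) against
# partial sums of the defining series.
B = {2: 1/6, 4: -1/30, 6: 1/42}
for two_n, b in B.items():
    n = two_n // 2
    closed = (2 * math.pi)**two_n * (-1)**(n + 1) * b / (2 * math.factorial(two_n))
    partial = sum(1 / k**two_n for k in range(1, 100_000))
    print(two_n, closed, partial)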
The normalized sinc function $\text{sinc}(x)=\frac{\sin (\pi x)}{\pi x}$ has a Weierstrass factorization representation as an infinite product:
$$
\frac{\sin (\pi x)}{\pi x} = \prod_{n=1}^\infty \left(1-\frac{x^2}{n^2}\right).
$$

The infinite product is analytic, so taking the natural logarithm of both sides and differentiating yields
$$
\frac{\pi \cos (\pi x)}{\sin (\pi x)}-\frac{1}{x}=-\sum_{n=1}^\infty \frac{2x}{n^2-x^2}.
$$

After dividing the equation by $2x$ and regrouping one gets
$$
\frac{1}{2x^2}-\frac{\pi \cot (\pi x)}{2x}=\sum_{n=1}^\infty \frac{1}{n^2-x^2}.
$$

We make a change of variables ($x=-it$):
$$
-\frac{1}{2t^2}+\frac{\pi \cot (-\pi it)}{2it}=\sum_{n=1}^\infty \frac{1}{n^2+t^2}.
$$

Euler's formula can be used to deduce that
$$
\frac{\pi \cot (-\pi i t)}{2it}=\frac{\pi}{2it}\frac{i\left(e^{2\pi t}+1\right)}{e^{2\pi t}-1}=\frac{\pi}{2t}+\frac{\pi}{t\left(e^{2\pi t} - 1\right)},
$$

or, using hyperbolic functions,
$$
\frac{\pi \cot (-\pi i t)}{2it}=\frac{\pi}{2t}{i\cot (\pi i t)}=\frac{\pi}{2t}\coth(\pi t).
$$

Then
$$
\sum_{n=1}^\infty \frac{1}{n^2+t^2}=\frac{\pi \left(te^{2\pi t}+t\right)-e^{2\pi t}+1}{2\left(t^2 e^{2\pi t}-t^2\right)}=-\frac{1}{2t^2} + \frac{\pi}{2t} \coth(\pi t).
$$

Now we take the limit as $t$ approaches zero and use L'Hôpital's rule thrice:
$$
\sum_{n=1}^\infty \frac{1}{n^2}=\lim_{t\to 0}\frac{\pi}{4}\frac{2\pi te^{2\pi t}-e^{2\pi t}+1}{\pi t^2 e^{2\pi t} + te^{2\pi t}-t}
$$
$$
\sum_{n=1}^\infty \frac{1}{n^2}=\lim_{t\to 0}\frac{\pi^3 te^{2\pi t}}{2\pi \left(\pi t^2 e^{2\pi t}+2te^{2\pi t} \right)+e^{2\pi t}-1}
$$
$$
\sum_{n=1}^\infty \frac{1}{n^2}=\lim_{t\to 0}\frac{\pi^2 (2\pi t+1)}{4\pi^2 t^2+12\pi t+6}
$$
$$
\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}.
$$

Use Parseval's identity (applied to the function f(x) = x) to obtain
$$
\sum_{n=-\infty}^\infty |c_n|^2 = \frac{1}{2\pi}\int_{-\pi}^\pi x^2 dx,
$$

where

<math>\begin{align}
c_n &= \frac{1}{2\pi}\int_{-\pi}^\pi x e^{-inx} dx \\[4pt]
&= \frac{n\pi \cos(n\pi)-\sin(n\pi)}{\pi n^2} i \\[4pt]
&= \frac{\cos(n\pi)}{n} i \\[4pt]
&= \frac{(-1)^n}{n} i
\end{align}</math>

for n ≠ 0, and c<sub>0</sub> = 0. Thus,

<math>|c_n|^2 = \begin{cases}
\dfrac{1}{n^2}, & \text{for } n \neq 0, \\
0, & \text{for } n = 0,
\end{cases}</math>

and
$$
\sum_{n=-\infty}^\infty |c_n|^2 = 2\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{2\pi} \int_{-\pi}^\pi x^2 dx.
$$

Therefore,
$$
\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{4\pi}\int_{-\pi}^\pi x^2 dx = \frac{\pi^2}{6}
$$

as required.

Given a complete orthonormal basis in the space $L^2_{\operatorname{per}}(0, 1)$ of L2 periodic functions over $(0, 1)$ (i.e., the subspace of square-integrable functions which are also periodic), denoted by $\{e_i\}_{i=-\infty}^{\infty}$, Parseval's identity tells us that
$$
\|x\|^2 = \sum_{i=-\infty}^{\infty} |\langle e_i, x\rangle|^2,
$$

where $\|x\| := \sqrt{\langle x,x\rangle}$ is defined in terms of the inner product on this Hilbert space given by
$$
\langle f, g\rangle = \int_0^1 f(x) \overline{g(x)} dx,\ f,g \in L^2_{\operatorname{per}}(0, 1).
$$

We can consider the orthonormal basis on this space defined by $e_k \equiv e_k(\vartheta) := \exp(2\pi\imath k \vartheta)$ such that $\langle e_k,e_j\rangle = \int_0^1 e^{2\pi\imath (k-j) \vartheta} d\vartheta = \delta_{k,j}$. Then if we take $f(\vartheta) := \vartheta$, we can compute both that

<math>\begin{align}
\|f\|^2 & = \int_0^1 \vartheta^2 d\vartheta = \frac{1}{3} \\
\langle f, e_k\rangle & = \int_0^1 \vartheta e^{-2\pi\imath k\vartheta} d\vartheta = \Biggl\{\begin{array}{ll} \frac{1}{2}, & k = 0 \\ -\frac{1}{2\pi\imath k}, & k \neq 0, \end{array}
\end{align}</math>

by elementary calculus and integration by parts, respectively. Finally, by Parseval's identity stated in the form above, we obtain that

<math>\begin{align}
\|f\|^2 = \frac{1}{3} & = \sum_{\stackrel{k=-\infty}{k \neq 0}}^{\infty} \frac{1}{(2\pi k)^2}+ \frac{1}{4}
= 2 \sum_{k=1}^{\infty} \frac{1}{(2\pi k)^2}+ \frac{1}{4} \\
& \implies \frac{\pi^2}{6} = \frac{2 \pi^2}{3} - \frac{\pi^2}{2} = \zeta(2).
\end{align}</math>

Note that by considering higher-order powers of $f_j(\vartheta) := \vartheta^j \in L^2_{\operatorname{per}}(0, 1)$ we can use integration by parts to extend this method to enumerating formulas for $\zeta(2j)$ when $j > 1$. In particular, suppose we let
$$
I_{j,k} := \int_0^1 \vartheta^j e^{-2\pi\imath k\vartheta} d\vartheta,
$$

so that integration by parts yields the recurrence relation

<math>\begin{align}
I_{j,k} & = \Biggl\{\begin{array}{ll} \frac{1}{j+1}, & k=0; \\ -\frac{1}{2\pi\imath \cdot k} + \frac{j}{2\pi\imath \cdot k} I_{j-1,k}, & k \neq 0\end{array} \\
& = \Biggl\{\begin{array}{ll} \frac{1}{j+1}, & k=0; \\ -\sum\limits_{m=1}^{j} \frac{j!}{(j+1-m)!} \cdot \frac{1}{(2\pi\imath \cdot k)^{m}}, & k \neq 0\end{array}.
\end{align}</math>

Then applying Parseval's identity as we did for the first case above, along with the linearity of the inner product, yields that

<math>\begin{align}
\|f_j\|^2 = \frac{1}{2j+1} & = 2 \sum_{k \geq 1} I_{j,k} \bar{I}_{j,k} + \frac{1}{(j+1)^2} \\
& = 2 \sum_{m=1}^j \sum_{r=1}^j \frac{j!^2}{(j+1-m)! (j+1-r)!} \frac{(-1)^r}{\imath^{m+r}} \frac{\zeta(m+r)}{(2\pi)^{m+r}} + \frac{1}{(j+1)^2}.
\end{align}</math>

While most proofs use results from advanced mathematics, such as Fourier analysis, complex analysis, and multivariable calculus, the following does not even require single-variable calculus (until a single limit is taken at the end).

For a proof using the residue theorem, see the linked article.

The proof goes back to Augustin Louis Cauchy (Cours d'Analyse, 1821, Note VIII). In 1954, this proof appeared in the book of Akiva and Isaak Yaglom "Nonelementary Problems in an Elementary Exposition". Later, in 1982, it appeared in the journal Eureka, attributed to John Scholes, but Scholes claims he learned the proof from Peter Swinnerton-Dyer, and in any case he maintains the proof was "common knowledge at Cambridge in the late 1960s".

[[File:limit circle FbN.jpeg|thumb|The inequality
$$
\tfrac{1}{2}r^2\tan\theta > \tfrac{1}{2}r^2\theta > \tfrac{1}{2}r^2\sin\theta
$$
is shown. Taking reciprocals and squaring gives
$$
\cot^2\theta<\tfrac{1}{\theta^2}<\csc^2\theta.
$$]]

The main idea behind the proof is to bound the partial (finite) sums
$$
\sum_{k=1}^m \frac{1}{k^2} = \frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2}
$$

between two expressions, each of which will tend to π<sup>2</sup>/6 as m approaches infinity. The two expressions are derived from identities involving the cotangent and cosecant functions. These identities are in turn derived from de Moivre's formula, and we now turn to establishing these identities.

Let x be a real number with 0 < x < π/2, and let n be a positive odd integer. Then from de Moivre's formula and the definition of the cotangent function, we have

<math>\begin{align}
\frac{\cos (nx) + i \sin (nx)}{\sin^n x} &= \frac{(\cos x + i\sin x)^n}{\sin^n x} \\[4pt]
&= \left(\frac{\cos x + i \sin x}{\sin x}\right)^n \\[4pt]
&= (\cot x + i)^n.
\end{align}</math>

From the binomial theorem, we have

<math>\begin{align}
(\cot x + i)^n
= & {n \choose 0} \cot^n x + {n \choose 1} (\cot^{n - 1} x)i + \cdots + {n \choose {n - 1}} (\cot x)i^{n - 1} + {n \choose n} i^n \\[6pt]
= & \Bigg( {n \choose 0} \cot^n x - {n \choose 2} \cot^{n - 2} x \pm \cdots \Bigg) + i\Bigg( {n \choose 1} \cot^{n-1} x - {n \choose 3} \cot^{n - 3} x \pm \cdots \Bigg).
\end{align}</math>

Combining the two equations and equating imaginary parts gives the identity
$$
\frac{\sin (nx)}{\sin^n x} = \Bigg( {n \choose 1} \cot^{n - 1} x - {n \choose 3} \cot^{n - 3} x \pm \cdots \Bigg).
$$

We take this identity, fix a positive integer m, set n = 2m + 1, and consider x<sub>r</sub> = rπ/(2m + 1) for r = 1, 2, ..., m. Then nx<sub>r</sub> is a multiple of π and therefore sin(nx<sub>r</sub>) = 0. So,
$$
0 = {{2m + 1} \choose 1} \cot^{2m} x_r - {{2m + 1} \choose 3} \cot^{2m - 2} x_r \pm \cdots + (-1)^m{{2m + 1} \choose {2m + 1}}
$$

for every r = 1, 2, ..., m. The values x<sub>1</sub>, x<sub>2</sub>, ..., x<sub>m</sub> are distinct numbers in the interval 0 < x<sub>r</sub> < π/2. Since the function cot<sup>2</sup> x is one-to-one on this interval, the numbers t<sub>r</sub> = cot<sup>2</sup> x<sub>r</sub> are distinct for r = 1, 2, ..., m. By the above equation, these m numbers are the roots of the mth degree polynomial
$$
p(t) = {{2m + 1} \choose 1}t^m - {{2m + 1} \choose 3}t^{m - 1} \pm \cdots + (-1)^m{{2m+1} \choose {2m + 1}}.
$$

By Vieta's formulas we can calculate the sum of the roots directly by examining the first two coefficients of the polynomial, and this comparison shows that
$$
\cot ^2 x_1 + \cot ^2 x_2 + \cdots + \cot ^2 x_m = \frac{\binom{2m + 1}3} {\binom{2m + 1}1} = \frac{2m(2m - 1)}6.
$$

Substituting the identity csc<sup>2</sup> x = cot<sup>2</sup> x + 1, we have
$$
\csc ^2 x_1 + \csc ^2 x_2 + \cdots + \csc ^2 x_m = \frac{2m(2m - 1)}6 + m = \frac{2m(2m + 2)}6.
$$

Now consider the inequality cot<sup>2</sup> x < 1/x<sup>2</sup> < csc<sup>2</sup> x (illustrated geometrically above). If we add up all these inequalities for each of the numbers x<sub>r</sub> = rπ/(2m + 1), and if we use the two identities above, we get
$$
\frac{2m(2m - 1)}6 < \left(\frac{2m + 1}{\pi} \right)^2 + \left(\frac{2m + 1}{2\pi} \right)^2 + \cdots + \left(\frac{2m + 1}{m \pi} \right)^2 < \frac{2m(2m + 2)}6.
$$

Multiplying through by (π/(2m + 1))<sup>2</sup>, this becomes
$$
\frac{\pi ^2}{6}\left(\frac{2m}{2m + 1}\right)\left(\frac{2m - 1}{2m + 1}\right) < \frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2} < \frac{\pi ^2}{6}\left(\frac{2m}{2m + 1}\right)\left(\frac{2m + 2}{2m + 1}\right).
$$

As m approaches infinity, the left and right hand expressions each approach π<sup>2</sup>/6, so by the squeeze theorem,

<math>\zeta(2) = \sum_{k=1}^\infty \frac{1}{k^2} =
\lim_{m \to \infty}\left(\frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2}\right) = \frac{\pi ^2}{6}</math>

and this completes the proof.
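The squeeze bounds above are easy to verify numerically for a particular m (illustrative sketch):

import math

# Check 2m(2m-1)/6 < sum_r ((2m+1)/(r*pi))^2 < 2m(2m+2)/6 for one m,
# and that rescaling the middle term by (pi/(2m+1))^2 approaches pi^2/6.
m = 1000
lower = 2 * m * (2 * m - 1) / 6
upper = 2 * m * (2 * m + 2) / 6
middle = sum(((2 * m + 1) / (r * math.pi))**2 for r in range(1, m + 1))
print(lower < middle < upper)                  # True
print(middle * (math.pi / (2 * m + 1))**2)     # ~ pi^2/6 = 1.6449...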
See the special cases of the identities for the Riemann zeta function when $s = 2.$ Other notable special identities and representations of this constant appear in the sections below.

The following are series representations of the constant:

<math>\begin{align}
\zeta(2) &= 3 \sum_{k=1}^\infty \frac{1}{k^2 \binom{2k}{k}} \\
&= \sum_{i=1}^\infty \sum_{j=1}^\infty \frac{(i-1)! (j-1)!}{(i+j)!}. \\
\end{align}</math>

There are also BBP-type series expansions for ζ(2). There are also the following continued fraction representations:
$$
\frac{\zeta(2)}{2} = \cfrac{1}{v_1 - \cfrac{1^4}{v_2-\cfrac{2^4}{v_3-\cfrac{3^4}{v_4-\ddots}}}},
$$

and
$$
\frac{\zeta(2)}{5} = \cfrac{1}{\widetilde{v}_1 - \cfrac{1^4}{\widetilde{v}_2-\cfrac{2^4}{\widetilde{v}_3-\cfrac{3^4}{\widetilde{v}_4-\ddots}}}},
$$

where $v_n = 2n-1 \mapsto \{1,3,5,7,9,\ldots\}$ and $\widetilde{v}_n = 11n^2-11n+3 \mapsto \{3,25,69,135,\ldots\}$.
wiki/wikipedia/1.txt
ADDED
@@ -0,0 +1,37 @@
In real analysis and measure theory, the Vitali convergence theorem, named after the Italian mathematician Giuseppe Vitali, is a generalization of the better-known dominated convergence theorem of Henri Lebesgue. It is a characterization of convergence in L<sup>p</sup> in terms of convergence in measure and a condition related to uniform integrability.

Let $(X,\mathcal{A},\mu)$ be a measure space, i.e. $\mu : \mathcal{A}\to [0,\infty]$ is a set function such that $\mu(\emptyset)=0$ and $\mu$ is countably additive. All functions considered in the sequel will be functions $f:X\to \mathbb{K}$, where $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$. We adopt the following definitions according to Bogachev's terminology.

* A set of functions $\mathcal{F} \subset L^1(X,\mathcal{A},\mu)$ is called uniformly integrable if $\lim_{M\to+\infty} \sup_{f\in\mathcal{F}} \int_{\{|f|>M\}} |f| d\mu = 0$, i.e. <math>\forall\ \varepsilon >0,\ \exists\ M_\varepsilon>0 : \sup_{f\in\mathcal{F}} \int_{\{|f|\geq M_\varepsilon\}} |f| d\mu < \varepsilon</math>.

* A set of functions $\mathcal{F} \subset L^1(X,\mathcal{A},\mu)$ is said to have uniformly absolutely continuous integrals if $\lim_{\mu(A)\to 0}\sup_{f\in\mathcal{F}} \int_A |f| d\mu = 0$, i.e. <math>\forall\ \varepsilon>0,\ \exists\ \delta_\varepsilon >0,\ \forall\ A\in\mathcal{A} : \mu(A)<\delta_\varepsilon \Rightarrow \sup_{f\in \mathcal{F}} \int_A |f| d\mu < \varepsilon</math>. This definition is sometimes used as a definition of uniform integrability. However, it differs from the definition of uniform integrability given above.

When $\mu(X)<\infty$, a set of functions $\mathcal{F} \subset L^1(X,\mathcal{A},\mu)$ is uniformly integrable if and only if it is bounded in $L^1(X,\mathcal{A},\mu)$ and has uniformly absolutely continuous integrals. If, in addition, $\mu$ is atomless, then uniform integrability is equivalent to the uniform absolute continuity of integrals.

Let $(X,\mathcal{A},\mu)$ be a measure space with $\mu(X)<\infty$. Let $(f_n)\subset L^p(X,\mathcal{A},\mu)$ and $f$ be an $\mathcal{A}$-measurable function. Then the following are equivalent:

# $f\in L^p(X,\mathcal{A},\mu)$ and $(f_n)$ converges to $f$ in $L^p(X,\mathcal{A},\mu)$;

# the sequence of functions $(f_n)$ converges in $\mu$-measure to $f$ and $(|f_n|^p)_{n\geq 1}$ is uniformly integrable.

For a proof, see Bogachev's monograph "Measure Theory, Volume I".

Let $(X,\mathcal{A},\mu)$ be a measure space and $1\leq p<\infty$. Let $(f_n)_{n\geq 1} \subseteq L^p(X,\mathcal{A},\mu)$ and $f\in L^p(X,\mathcal{A},\mu)$. Then $(f_n)$ converges to $f$ in $L^p(X,\mathcal{A},\mu)$ if and only if the following hold:

# the sequence of functions $(f_n)$ converges in $\mu$-measure to $f$;

# $(f_n)$ has uniformly absolutely continuous integrals;

# for every $\varepsilon>0$, there exists $X_\varepsilon\in \mathcal{A}$ such that $\mu(X_\varepsilon)<\infty$ and $\sup_{n\geq 1}\int_{X\setminus X_\varepsilon} |f_n|^p d\mu <\varepsilon.$

When $\mu(X)<\infty$, the third condition becomes superfluous (one can simply take $X_\varepsilon = X$) and the first two conditions give the usual form of the Lebesgue–Vitali convergence theorem, originally stated for measure spaces with finite measure. In this case, one can show that conditions 1 and 2 imply that the sequence $(|f_n|^p)_{n\geq 1}$ is uniformly integrable.

Let $(X,\mathcal{A},\mu)$ be a measure space. Let $(f_n)_{n\geq 1} \subseteq L^1(X,\mathcal{A},\mu)$ and assume that $\lim_{n\to\infty}\int_A f_n d\mu$ exists for every $A\in\mathcal{A}$. Then the sequence $(f_n)$ is bounded in $L^1(X,\mathcal{A},\mu)$ and has uniformly absolutely continuous integrals. In addition, there exists $f\in L^1(X,\mathcal{A},\mu)$ such that $\lim_{n\to\infty}\int_A f_n d\mu = \int_A f d\mu$ for every $A\in\mathcal{A}$.

When $\mu(X)<\infty$, this implies that $(f_n)$ is uniformly integrable.

For a proof, see Bogachev's monograph "Measure Theory, Volume I".
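A standard example of what uniform integrability rules out: on (0, 1), the functions f_n = n·1_{(0,1/n)} converge to 0 in measure, yet each has L^1 norm 1, so L^1 convergence fails. A rough numerical sketch (illustrative only, using a crude midpoint Riemann sum):

# f_n = n on (0, 1/n) and 0 elsewhere: converges to 0 in measure on (0, 1),
# but ||f_n||_1 stays 1, so (f_n) is not uniformly integrable and the
# L^1 convergence in the theorem correctly fails.
def l1_norm(n, grid=10**5):
    h = 1.0 / grid
    return sum(h * (n if (i + 0.5) * h < 1.0 / n else 0.0) for i in range(grid))

for n in (10, 100, 1000):
    print(n, l1_norm(n))   # stays ~1.0 instead of tending to 0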
wiki/wikipedia/10.txt
ADDED
@@ -0,0 +1,15 @@
Jean-Yves Girard (born 1947) is a French logician working in proof theory. He is the research director (emeritus) at the mathematical institute of the University of Aix-Marseille, at Luminy.

Jean-Yves Girard is an alumnus of the École normale supérieure de Saint-Cloud.

He made a name for himself in the 1970s with his proof of strong normalization in a system of second-order logic called System F. This result gave a new proof of Takeuti's conjecture, which was proven a few years earlier by William W. Tait, Motō Takahashi and Dag Prawitz. For this purpose, he introduced the notion of "reducibility candidate" ("candidat de réducibilité"). He is also credited with the discovery of Girard's paradox, linear logic, the geometry of interaction, ludics, and (satirically) the mustard watch.

He obtained the CNRS Silver Medal in 1983 and is a member of the French Academy of Sciences.
wiki/wikipedia/100.txt
ADDED
@@ -0,0 +1,43 @@
In probability and statistics, an urn problem is an idealized mental exercise in which some objects of real interest (such as atoms, people, cars, etc.) are represented as colored balls in an urn or other container. One pretends to remove one or more balls from the urn; the goal is to determine the probability of drawing one color or another, or some other properties. A number of important variations are described below.

An urn model is either a set of probabilities that describe events within an urn problem, or it is a probability distribution, or a family of such distributions, of random variables associated with urn problems.

In Ars Conjectandi (1713), Jacob Bernoulli considered the problem of determining, given a number of pebbles drawn from an urn, the proportions of different colored pebbles within the urn. This problem was known as the inverse probability problem, and was a topic of research in the eighteenth century, attracting the attention of Abraham de Moivre and Thomas Bayes.

Bernoulli used the Latin word urna, which primarily means a clay vessel, but is also the term used in ancient Rome for a vessel of any kind for collecting ballots or lots; the present-day Italian word for ballot box is still urna. Bernoulli's inspiration may have been lotteries, elections, or games of chance which involved drawing balls from a container, and it has been asserted that elections in medieval and renaissance Venice, including that of the doge, often included the choice of electors by lot, using balls of different colors drawn from an urn.

In this basic urn model in probability theory, the urn contains x white and y black balls, well-mixed together. One ball is drawn randomly from the urn and its color observed; it is then placed back in the urn (or not), and the selection process is repeated.

Possible questions that can be answered in this model are:

* Can I infer the proportion of white and black balls from n observations? With what degree of confidence?

* Knowing x and y, what is the probability of drawing a specific sequence (e.g. one white followed by one black)?

* If I only observe n balls, how sure can I be that there are no black balls? (A variation on the first question.)

Examples of urn problems, and the distributions and models they give rise to, include:

* beta-binomial distribution: as above, except that every time a ball is observed, an additional ball of the same color is added to the urn. Hence, the number of total balls in the urn grows. See Pólya urn model.

* binomial distribution: the distribution of the number of successful draws (trials), i.e. extraction of white balls, given n draws with replacement in an urn with black and white balls.

* Hoppe urn: a Pólya urn with an additional ball called the mutator. When the mutator is drawn, it is replaced along with an additional ball of an entirely new colour.

* hypergeometric distribution: the balls are not returned to the urn once extracted. Hence, the number of total marbles in the urn decreases. This is referred to as "drawing without replacement", as opposed to "drawing with replacement" (see the simulation sketch after this list).

* multivariate hypergeometric distribution: as above, but with balls of more than two colors.

* geometric distribution: number of draws before the first successful (correctly colored) draw.

* multinomial distribution: the urn contains balls in more than two colors.

* negative binomial distribution: number of draws before a certain number of failures (incorrectly colored draws) occurs.

* occupancy problem: the distribution of the number of occupied urns after the random assignment of k balls into n urns, related to the coupon collector's problem and birthday problem.

* Pólya urn: each time a ball of a particular colour is drawn, it is replaced along with an additional ball of the same colour.

* Statistical physics: derivation of energy and velocity distributions.

* The Ellsberg paradox.
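A minimal simulation for the hypergeometric case referenced above (names illustrative), comparing draw-without-replacement frequencies with the closed-form probability:

import math
import random

# Probability of exactly k white balls in n draws without replacement
# from an urn with W white and B black balls (hypergeometric pmf).
def hypergeom_pmf(k, W, B, n):
    return math.comb(W, k) * math.comb(B, n - k) / math.comb(W + B, n)

W, B, n, k = 7, 5, 6, 3
urn = ["w"] * W + ["b"] * B
trials = 100_000
hits = sum(random.sample(urn, n).count("w") == k for _ in range(trials))
print(hits / trials, hypergeom_pmf(k, W, B, n))   # should roughly agree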
wiki/wikipedia/1000.txt
ADDED
@@ -0,0 +1,450 @@
In elementary algebra, completing the square is a technique for converting a quadratic polynomial of the form
$$
ax^2 + bx + c
$$

to the form
$$
a(x-h)^2 + k
$$

for some values of h and k.

Completing the square is used in

* solving quadratic equations,

* deriving the quadratic formula,

* graphing quadratic functions,

* evaluating integrals in calculus, such as Gaussian integrals with a linear term in the exponent,

* finding Laplace transforms.

In mathematics, completing the square is often applied in any computation involving quadratic polynomials.

The formula in elementary algebra for computing the square of a binomial is:
$$
(x + p)^2 = x^2 + 2px + p^2.
$$

For example:

<math>\begin{alignat}{2}
(x+3)^2 &= x^2 + 6x + 9 && (p=3)\\[3pt]
(x-5)^2 &= x^2 - 10x + 25\qquad && (p=-5).
\end{alignat}</math>

In any perfect square, the coefficient of x is twice the number p, and the constant term is equal to p<sup>2</sup>.

Consider the following quadratic polynomial:
$$
x^2 + 10x + 28.
$$

This quadratic is not a perfect square, since 28 is not the square of 5:
$$
(x+5)^2 = x^2 + 10x + 25.
$$

However, it is possible to write the original quadratic as the sum of this square and a constant:
$$
x^2 + 10x + 28 = (x+5)^2 + 3.
$$

This is called completing the square.

Given any monic quadratic
$$
x^2 + bx + c,
$$

it is possible to form a square that has the same first two terms:
$$
\left(x+\tfrac{1}{2} b\right)^2 = x^2 + bx + \tfrac{1}{4}b^2.
$$

This square differs from the original quadratic only in the value of the constant term. Therefore, we can write
$$
x^2 + bx + c = \left(x + \tfrac{1}{2}b\right)^2 + k,
$$

where $k = c - \frac{b^2}{4}$. This operation is known as completing the square.

For example:

<math>\begin{alignat}{1}
x^2 + 6x + 11 &= (x+3)^2 + 2 \\[3pt]
x^2 + 14x + 30 &= (x+7)^2 - 19 \\[3pt]
x^2 - 2x + 7 &= (x-1)^2 + 6.
\end{alignat}</math>

Given a quadratic polynomial of the form
$$
ax^2 + bx + c,
$$

it is possible to factor out the coefficient a, and then complete the square for the resulting monic polynomial.

Example:

<math>\begin{align}
3x^2 + 12x + 27 &= 3[x^2+4x+9]\\
&{}= 3\left[(x+2)^2 + 5\right]\\
&{}= 3(x+2)^2 + 3(5)\\
&{}= 3(x+2)^2 + 15
\end{align}</math>

This process of factoring out the coefficient a can be simplified further by factoring it out of only the first two terms; the constant at the end of the polynomial does not have to be included.

Example:

<math>\begin{align}
3x^2 + 12x + 27 &= 3[x^2+4x] + 27\\
&{}= 3\left[(x+2)^2 -4\right] + 27\\
&{}= 3(x+2)^2 + 3(-4) + 27\\
&{}= 3(x+2)^2 - 12 + 27\\
&{}= 3(x+2)^2 + 15
\end{align}</math>

This allows the writing of any quadratic polynomial in the form
$$
a(x-h)^2 + k.
$$

The result of completing the square may be written as a formula. In the general case, one has
$$
ax^2 + bx + c = a(x-h)^2 + k,
$$

with
$$
h = -\frac{b}{2a} \quad\text{and}\quad k = c - ah^2 = c - \frac{b^2}{4a}.
$$

In particular, when a = 1, one has
$$
x^2 + bx + c = (x-h)^2 + k,
$$

with
$$
h = -\frac{b}{2} \quad\text{and}\quad k = c - h^2 = c - \frac{b^2}{4}.
$$

By solving the equation $a(x-h)^2 + k=0$ in terms of $x-h,$ and reorganizing the resulting expression, one gets the quadratic formula for the roots of the quadratic equation:
$$
x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}.
$$
+
The matrix case looks very similar:
|
170 |
+
$$
|
171 |
+
x^{\mathrm{T}}Ax + x^{\mathrm{T}}b + c = (x - h)^{\mathrm{T}}A(x - h) + k \quad\text{where}\quad h = -\frac{1}{2}A^{-1}b \quad\text{and}\quad k = c - \frac{1}{4}b^{\mathrm{T}}A^{-1}b
|
172 |
+
$$
|
173 |
+
|
174 |
+
where $A$ has to be symmetric.
|
175 |
+
|
176 |
+
If $A$ is not symmetric the formulae for $h$ and $k$ have
|
177 |
+
|
178 |
+
to be generalized to:
|
179 |
+
$$
|
180 |
+
h = -(A+A^{\mathrm{T}})^{-1}b \quad\text{and}\quad k = c - h^{\mathrm{T}}A h = c - b^{\mathrm{T}} (A+A^{\mathrm{T}})^{-1} A (A+A^{\mathrm{T}})^{-1}b
|
181 |
+
$$.
|
182 |
+
|
183 |
+
In analytic geometry, the graph of any quadratic function is a parabola in the xy-plane. Given a quadratic polynomial of the form
|
184 |
+
$$
|
185 |
+
a(x-h)^2 + k
|
186 |
+
$$
|
187 |
+
|
188 |
+
the numbers h and k may be interpreted as the Cartesian coordinates of the vertex (or stationary point) of the parabola. That is, h is the x-coordinate of the axis of symmetry (i.e. the axis of symmetry has equation x = h), and k is the minimum value (or maximum value, if a < 0) of the quadratic function.
|
189 |
+
|
190 |
+
One way to see this is to note that the graph of the function ƒ(x) = x<sup>2</sup> is a parabola whose vertex is at the origin (0, 0). Therefore, the graph of the function ƒ(x - h) = (x - h)<sup>2</sup> is a parabola shifted to the right by h whose vertex is at (h, 0), as shown in the top figure. In contrast, the graph of the function ƒ(x) + k = x<sup>2</sup> + k is a parabola shifted upward by k whose vertex is at (0, k), as shown in the center figure. Combining both horizontal and vertical shifts yields ƒ(x - h) + k = (x - h)<sup>2</sup> + k is a parabola shifted to the right by h and upward by k whose vertex is at (h, k), as shown in the bottom figure.
|
191 |
+
|
192 |
+
Completing the square may be used to solve any quadratic equation. For example:
|
193 |
+
$$
|
194 |
+
x^2 + 6x + 5 = 0.
|
195 |
+
$$
|
196 |
+
|
197 |
+
The first step is to complete the square:
|
198 |
+
$$
|
199 |
+
(x+3)^2 - 4 = 0.
|
200 |
+
$$
|
201 |
+
|
202 |
+
Next we solve for the squared term:
|
203 |
+
$$
|
204 |
+
(x+3)^2 = 4.
|
205 |
+
$$
|
206 |
+
|
207 |
+
Then either
|
208 |
+
$$
|
209 |
+
x+3 = -2 \quad\text{or}\quad x+3 = 2,
|
210 |
+
$$
|
211 |
+
|
212 |
+
and therefore
|
213 |
+
$$
|
214 |
+
x = -5 \quad\text{or}\quad x = -1.
|
215 |
+
$$
|
216 |
+
|
217 |
+
This can be applied to any quadratic equation. When the x<sup>2</sup> has a coefficient other than 1, the first step is to divide out the equation by this coefficient: for an example see the non-monic case below.
|
218 |
+
|
219 |
+
Unlike methods involving factoring the equation, which is reliable only if the roots are rational, completing the square will find the roots of a quadratic equation even when those roots are irrational or complex. For example, consider the equation
|
220 |
+
$$
|
221 |
+
x^2 - 10x + 18 = 0.
|
222 |
+
$$
|
223 |
+
|
224 |
+
Completing the square gives
|
225 |
+
$$
|
226 |
+
(x-5)^2 - 7 = 0,
|
227 |
+
$$
|
228 |
+
|
229 |
+
so
|
230 |
+
$$
|
231 |
+
(x-5)^2 = 7.
|
232 |
+
$$
|
233 |
+
|
234 |
+
Then either
|
235 |
+
$$
|
236 |
+
x-5 = -\sqrt{7} \quad\text{or}\quad x-5 = \sqrt{7}.
|
237 |
+
$$
|
238 |
+
|
239 |
+
In terser language:
|
240 |
+
$$
|
241 |
+
x-5 = \pm \sqrt{7},
|
242 |
+
$$
|
243 |
+
|
244 |
+
so
|
245 |
+
$$
|
246 |
+
x = 5 \pm \sqrt{7}.
|
247 |
+
$$
|
248 |
+
|
249 |
+
Equations with complex roots can be handled in the same way. For example:
|
250 |
+
|
251 |
+
<math>\begin{array}{c}
|
252 |
+
|
253 |
+
x^2 + 4x + 5 = 0 \\[6pt]
|
254 |
+
|
255 |
+
(x+2)^2 + 1 = 0 \\[6pt]
|
256 |
+
|
257 |
+
(x+2)^2 = -1 \\[6pt]
|
258 |
+
|
259 |
+
x+2 = \pm i \\[6pt]
|
260 |
+
|
261 |
+
x = -2 \pm i.
|
262 |
+
|
263 |
+
\end{array}
|
264 |
+
|
265 |
+
</math>
|
266 |
+
|
267 |
+
For an equation involving a non-monic quadratic, the first step to solving them is to divide through by the coefficient of x<sup>2</sup>. For example:
|
268 |
+
|
269 |
+
<math>\begin{array}{c}
|
270 |
+
|
271 |
+
2x^2 + 7x + 6 = 0 \\[6pt]
|
272 |
+
|
273 |
+
x^2 + \tfrac{7}{2}x + 3 = 0 \\[6pt]
|
274 |
+
|
275 |
+
\left(x+\tfrac{7}{4}\right)^2 - \tfrac{1}{16} = 0 \\[6pt]
|
276 |
+
|
277 |
+
\left(x+\tfrac{7}{4}\right)^2 = \tfrac{1}{16} \\[6pt]
|
278 |
+
|
279 |
+
x+\tfrac{7}{4} = \tfrac{1}{4} \quad\text{or}\quad x+\tfrac{7}{4} = -\tfrac{1}{4} \\[6pt]
|
280 |
+
|
281 |
+
x = -\tfrac{3}{2} \quad\text{or}\quad x = -2.
|
282 |
+
|
283 |
+
\end{array}
|
284 |
+
|
285 |
+
</math>
|
286 |
+
|
287 |
+
Applying this procedure to the general form of a quadratic equation leads to the quadratic formula.
|
288 |
+
|
289 |
+
Completing the square may be used to evaluate any integral of the form
|
290 |
+
$$
|
291 |
+
\int\frac{dx}{ax^2+bx+c}
|
292 |
+
$$
|
293 |
+
|
294 |
+
using the basic integrals
|
295 |
+
|
296 |
+
<math>\int\frac{dx}{x^2 - a^2} = \frac{1}{2a}\ln\left|\frac{x-a}{x+a}\right| +C \quad\text{and}\quad
|
297 |
+
|
298 |
+
\int\frac{dx}{x^2 + a^2} = \frac{1}{a}\arctan\left(\frac{x}{a}\right) +C.</math>
|
299 |
+
|
300 |
+
For example, consider the integral
|
301 |
+
$$
|
302 |
+
\int\frac{dx}{x^2 + 6x + 13}.
|
303 |
+
$$
|
304 |
+
|
305 |
+
Completing the square in the denominator gives:
|
306 |
+
$$
|
307 |
+
\int\frac{dx}{(x+3)^2 + 4} = \int\frac{dx}{(x+3)^2 + 2^2}.
|
308 |
+
$$
|
309 |
+
|
310 |
+
This can now be evaluated by using the substitution
|
311 |
+
|
312 |
+
u = x + 3, which yields
|
313 |
+
$$
|
314 |
+
\int\frac{dx}{(x+3)^2 + 4} = \frac{1}{2}\arctan\left(\frac{x+3}{2}\right)+C.
|
315 |
+
$$
|
316 |
+
|
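The arctan antiderivative can be sanity-checked with a midpoint rule (illustrative sketch; the integration interval is an arbitrary choice):

import math

# Compare a numerical integral of 1/(x^2 + 6x + 13) over [0, 2] with
# F(2) - F(0), where F(x) = 0.5 * atan((x + 3) / 2).
def F(x):
    return 0.5 * math.atan((x + 3) / 2)

N, a, b = 100_000, 0.0, 2.0
h = (b - a) / N
numeric = sum(h / ((a + (i + 0.5) * h)**2 + 6 * (a + (i + 0.5) * h) + 13)
              for i in range(N))
print(numeric, F(b) - F(a))   # the two values should agree closely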
Consider the expression
$$
|z|^2 - b^*z - bz^* + c,
$$

where z and b are complex numbers, z<sup>*</sup> and b<sup>*</sup> are the complex conjugates of z and b, respectively, and c is a real number. Using the identity |u|<sup>2</sup> = uu<sup>*</sup> we can rewrite this as
$$
|z-b|^2 - |b|^2 + c,
$$

which is clearly a real quantity. This is because

<math>\begin{align}
|z-b|^2 &{}= (z-b)(z-b)^*\\
&{}= (z-b)(z^*-b^*)\\
&{}= zz^* - zb^* - bz^* + bb^*\\
&{}= |z|^2 - zb^* - bz^* + |b|^2 .
\end{align}</math>

As another example, the expression
$$
ax^2 + by^2 + c,
$$

where a, b, c, x, and y are real numbers, with a > 0 and b > 0, may be expressed in terms of the square of the absolute value of a complex number. Define
$$
z = \sqrt{a}x + i \sqrt{b} y.
$$

Then

<math>\begin{align}
|z|^2 &{}= z z^*\\
&{}= (\sqrt{a}x + i \sqrt{b}y)(\sqrt{a}x - i \sqrt{b}y) \\
&{}= ax^2 - i\sqrt{ab}xy + i\sqrt{ba}yx - i^2by^2 \\
&{}= ax^2 + by^2,
\end{align}</math>

so
$$
ax^2 + by^2 + c = |z|^2 + c.
$$

A matrix M is idempotent when M<sup>2</sup> = M. Idempotent matrices generalize the idempotent properties of 0 and 1. The completing-the-square method of addressing the equation
$$
a^2 + b^2 = a
$$

shows that some idempotent 2×2 matrices are parametrized by a circle in the (a,b)-plane:

The matrix $\begin{pmatrix}a & b \\ b & 1-a \end{pmatrix}$ will be idempotent provided $a^2 + b^2 = a,$ which, upon completing the square, becomes
$$
(a - \tfrac{1}{2})^2 + b^2 = \tfrac{1}{4}.
$$

In the (a,b)-plane, this is the equation of a circle with center (1/2, 0) and radius 1/2.

Consider completing the square for the equation
$$
x^2 + bx = a.
$$

Since x<sup>2</sup> represents the area of a square with side of length x, and bx represents the area of a rectangle with sides b and x, the process of completing the square can be viewed as visual manipulation of rectangles.

Simple attempts to combine the x<sup>2</sup> and the bx rectangles into a larger square result in a missing corner. The term (b/2)<sup>2</sup> added to each side of the above equation is precisely the area of the missing corner, whence derives the terminology "completing the square".

As conventionally taught, completing the square consists of adding the third term, v<sup>2</sup>, to
$$
u^2 + 2uv
$$

to get a square. There are also cases in which one can add the middle term, either 2uv or −2uv, to
$$
u^2 + v^2
$$

to get a square.

By writing

<math>\begin{align}
x + {1 \over x} &{} = \left(x - 2 + {1 \over x}\right) + 2\\
&{}= \left(\sqrt{x} - {1 \over \sqrt{x}}\right)^2 + 2
\end{align}</math>

we show that the sum of a positive number x and its reciprocal is always greater than or equal to 2. The square of a real expression is always greater than or equal to zero, which gives the stated bound; and here we achieve 2 just when x is 1, causing the square to vanish.

Consider the problem of factoring the polynomial
$$
x^4 + 324.
$$

This is
$$
(x^2)^2 + (18)^2,
$$

so the middle term is 2(x<sup>2</sup>)(18) = 36x<sup>2</sup>. Thus we get

<math>\begin{align} x^4 + 324 &{}= (x^4 + 36x^2 + 324 ) - 36x^2 \\
&{}= (x^2 + 18)^2 - (6x)^2 \qquad \text{(a difference of two squares)} \\
&{}= (x^2 + 18 + 6x)(x^2 + 18 - 6x) \\
&{}= (x^2 + 6x + 18)(x^2 - 6x + 18)
\end{align}</math>

(the last line being added merely to follow the convention of decreasing degrees of terms).

The same argument shows that $x^4 + 4a^4$ is always factorizable as
$$
x^4 + 4a^4 =(x^2+2a x + 2a^2)(x^2-2 ax + 2a^2)
$$

(also known as Sophie Germain's identity).
wiki/wikipedia/1001.txt
ADDED
@@ -0,0 +1,15 @@
1 |
+
:A Shadowing lemma is also a fictional creature in the Discworld.
|
2 |
+
|
3 |
+
In the theory of dynamical systems, the shadowing lemma is a lemma describing the behaviour of pseudo-orbits near a hyperbolic invariant set. Informally, the theory states that every pseudo-orbit (which one can think of as a numerically computed trajectory with rounding errors on every step) stays uniformly close to some true trajectory (with slightly altered initial position)—in other words, a pseudo-trajectory is "shadowed" by a true one.
|
4 |
+
|
5 |
+
Given a map f : X → X of a metric space (X, d) to itself, define a ε-pseudo-orbit (or ε-orbit) as a sequence $(x_n)$ of points such that $x_{n+1}$ belongs to a ε-neighborhood of $f(x_n)$.

Then, near a hyperbolic invariant set, the following statement holds:

Let Λ be a hyperbolic invariant set of a diffeomorphism f. There exists a neighborhood U of Λ with the following property: for any δ > 0 there exists ε > 0, such that any (finite or infinite) ε-pseudo-orbit that stays in U also stays in a δ-neighborhood of some true orbit.

<math>
\forall (x_n),\ x_n \in U,\ d(x_{n+1}, f(x_n)) < \varepsilon \quad \exists (y_n),\ y_{n+1} = f(y_n), \quad \text{such that } \forall n\; x_n \in U_{\delta}(y_n).
</math>
wiki/wikipedia/1002.txt
ADDED
@@ -0,0 +1,22 @@
In mathematics, the Lebedev–Milin inequality is any of several inequalities for the coefficients of the exponential of a power series, found by Lebedev and Milin. It was used in the proof of the Bieberbach conjecture, as it shows that the Milin conjecture implies the Robertson conjecture.

They state that if

$$
\sum_{k\ge 0} \beta_kz^k = \exp\left(\sum_{k\ge 1} \alpha_kz^k\right)
$$

for complex numbers β<sub>k</sub> and α<sub>k</sub>, and n is a positive integer, then

<math>\sum_{k=0}^{\infty}|\beta_k|^2 \le \exp\left(\sum_{k=1}^\infty k|\alpha_k|^2\right),</math>

<math>\sum_{k=0}^{n}|\beta_k|^2 \le (n+1)\exp\left(\frac{1}{n+1}\sum_{m=1}^{n}\sum_{k=1}^m(k|\alpha_k|^2 -1/k)\right),</math>

<math>|\beta_n|^2 \le \exp\left(\sum_{k=1}^n(k|\alpha_k|^2 -1/k)\right).</math>
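
The first inequality is easy to probe numerically. A sketch (illustrative, assuming numpy; truncation at N terms approximates both sides), using the recurrence n·β<sub>n</sub> = Σ<sub>k=1..n</sub> k·α<sub>k</sub>·β<sub>n-k</sub> obtained by differentiating the defining equation:

import numpy as np

N = 50
rng = np.random.default_rng(0)
alpha = np.zeros(N + 1, dtype=complex)
alpha[1:] = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.arange(1, N + 1) ** 2

beta = np.zeros(N + 1, dtype=complex)
beta[0] = 1.0
for n in range(1, N + 1):
    beta[n] = sum(k * alpha[k] * beta[n - k] for k in range(1, n + 1)) / n

lhs = np.sum(np.abs(beta) ** 2)
rhs = np.exp(np.sum(np.arange(1, N + 1) * np.abs(alpha[1:]) ** 2))
print(lhs <= rhs)   # expected: True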
See also exponential formula (on exponentiation of power series).
wiki/wikipedia/1003.txt
ADDED
@@ -0,0 +1,5 @@
Advanced Synchronization Facility (ASF) is a proposed extension to the x86-64 instruction set architecture that adds hardware transactional memory support. It was introduced by AMD; the latest specification was dated March 2009, and the extension remained in the proposal stage. No released microprocessors implement the extension.

ASF provides the capability to start, end and abort transactional execution and to mark CPU cache lines for protected memory access in transactional code regions. It contains four new instructions—<code>SPECULATE</code>, <code>COMMIT</code>, <code>ABORT</code> and <code>RELEASE</code>—and turns the otherwise invalid <code>LOCK</code>-prefixed <code>MOVx</code>, <code>PREFETCH</code> and <code>PREFETCHW</code> instructions into valid ones inside transactional code regions. Up to 256 levels of nested transactional code regions are supported.

The <code>SPECULATE</code> and <code>COMMIT</code> instructions mark the start and end of a transactional code region. Inside transactional code regions, the <code>LOCK</code>-prefixed <code>MOVx reg/xmm, mem</code>, <code>PREFETCH</code> and <code>PREFETCHW</code> instructions can mark up to four cache lines for protected memory access. Accesses from other processor cores to the protected cache lines result in exceptions, which in turn cause transaction aborts. Stores to protected cache lines must be performed using the <code>LOCK MOVx mem, reg/imm/xmm</code> instructions. Marked cache lines can be released from protection with the <code>RELEASE</code> instruction. Transaction aborts generated by hardware or explicitly requested through the <code>ABORT</code> instruction roll back modifications to the protected cache lines and restart execution from the instruction following the top-level <code>SPECULATE</code> instruction.
wiki/wikipedia/1004.txt
ADDED
@@ -0,0 +1,5 @@
In mathematics, the Ahlfors conjecture, now a theorem, states that the limit set of a finitely-generated Kleinian group is either the whole Riemann sphere, or has measure 0.

The conjecture was introduced by Ahlfors, who proved it in the case that the Kleinian group has a fundamental domain with a finite number of sides. Canary proved the Ahlfors conjecture for topologically tame groups, by showing that a topologically tame Kleinian group is geometrically tame, so the Ahlfors conjecture follows from Marden's tameness conjecture that hyperbolic 3-manifolds with finitely generated fundamental groups are topologically tame (homeomorphic to the interior of compact 3-manifolds). This latter conjecture was proved, independently, by Agol and by Calegari and Gabai.

Canary also showed that in the case when the limit set is the whole sphere, the action of the Kleinian group on the limit set is ergodic.
wiki/wikipedia/1005.txt
ADDED
@@ -0,0 +1,11 @@
In mathematics, in the field of algebraic number theory, a Bauerian extension is a field extension of an algebraic number field which is characterized by the prime ideals with inertial degree one in the extension.

For a finite degree extension L/K of an algebraic number field K we define P(L/K) to be the set of primes p of K which have a factor P with inertial degree one (that is, the residue field of P has the same order as the residue field of p).

Bauer's theorem states that if M/K is a finite degree Galois extension, then P(M/K) ⊇ P(L/K) if and only if M ⊆ L. In particular, finite degree Galois extensions N of K are characterised by the set of prime ideals which split completely in N.

An extension F/K is Bauerian if it obeys Bauer's theorem: that is, for every finite extension L of K, we have P(F/K) ⊇ P(L/K) if and only if L contains a subfield K-isomorphic to F.

All field extensions of degree at most 4 over Q are Bauerian.

An example of a non-Bauerian extension is the Galois extension of Q by the roots of 2x<sup>5</sup> − 32x + 1, which has Galois group S<sub>5</sub>.
wiki/wikipedia/1006.txt
ADDED
@@ -0,0 +1,94 @@
In mathematics, Stone's theorem on one-parameter unitary groups is a basic theorem of functional analysis that establishes a one-to-one correspondence between self-adjoint operators on a Hilbert space $\mathcal{H}$ and one-parameter families

$$
(U_{t})_{t \in \R}
$$

of unitary operators that are strongly continuous, i.e.,

$$
\forall t_0 \in \R, \psi \in \mathcal{H}: \qquad \lim_{t \to t_0} U_t(\psi) = U_{t_0}(\psi),
$$

and are homomorphisms, i.e.,

$$
\forall s,t \in \R : \qquad U_{t + s} = U_t U_s.
$$

Such one-parameter families are ordinarily referred to as strongly continuous one-parameter unitary groups.

The theorem was proved by Stone, and von Neumann showed that the requirement that $(U_t)_{t \in \R}$ be strongly continuous can be relaxed to say that it is merely weakly measurable, at least when the Hilbert space is separable.

This is an impressive result, as it makes it possible to define the derivative of the mapping $t \mapsto U_t,$ which is a priori only assumed to be continuous. It is also related to the theory of Lie groups and Lie algebras.

The statement of the theorem is as follows.

Theorem. Let $(U_t)_{t \in \R}$ be a strongly continuous one-parameter unitary group. Then there exists a unique (possibly unbounded) operator $A: \mathcal{D}_A \to \mathcal{H}$, that is self-adjoint on $\mathcal{D}_A$ and such that

$$
\forall t \in \R : \qquad U_t = e^{itA}.
$$

The domain of $A$ is defined by

$$
\mathcal{D}_A = \left \{ \psi \in \mathcal{H} \left | \lim_{\varepsilon \to 0} \frac{-i}{\varepsilon} \left(U_{\varepsilon} (\psi) - \psi \right) \text{ exists} \right. \right \}.
$$

Conversely, let $A: \mathcal{D}_A \to \mathcal{H}$ be a (possibly unbounded) self-adjoint operator on $\mathcal{D}_A \subseteq \mathcal{H}.$ Then the one-parameter family $(U_{t})_{t \in \R}$ of unitary operators defined by

$$
\forall t \in \R : \qquad U_{t} := e^{itA}
$$

is a strongly continuous one-parameter group.

In both parts of the theorem, the expression $e^{itA}$ is defined by means of the spectral theorem for unbounded self-adjoint operators.

The operator $A$ is called the infinitesimal generator of $(U_{t})_{t \in \R}.$ Furthermore, $A$ will be a bounded operator if and only if the operator-valued mapping $t \mapsto U_{t}$ is norm-continuous.

The infinitesimal generator $A$ of a strongly continuous unitary group $(U_{t})_{t \in \R}$ may be computed as

$$
A\psi = -i\lim_{\varepsilon\to 0}\frac{U_\varepsilon\psi-\psi}{\varepsilon},
$$

with the domain of $A$ consisting of those vectors $\psi$ for which the limit exists in the norm topology. That is to say, $A$ is equal to $-i$ times the derivative of $U_t$ with respect to $t$ at $t=0$. Part of the statement of the theorem is that this derivative exists—i.e., that $A$ is a densely defined self-adjoint operator. The result is not obvious even in the finite-dimensional case, since $U_t$ is only assumed (ahead of time) to be continuous, and not differentiable.
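
In finite dimensions the correspondence is easy to verify numerically; a sketch (illustrative, assuming numpy and scipy) for a Hermitian matrix A:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (B + B.conj().T) / 2                      # a self-adjoint generator

U = lambda t: expm(1j * t * A)
I = np.eye(3)
print(np.allclose(U(0.7) @ U(0.7).conj().T, I))              # unitarity
print(np.allclose(U(0.3) @ U(0.4), U(0.7)))                  # group law U_{s+t} = U_s U_t
eps = 1e-6
print(np.allclose(-1j * (U(eps) - I) / eps, A, atol=1e-4))   # difference quotient recovers A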

The family of translation operators

$$
\left[ T_t(\psi) \right](x) = \psi(x + t)
$$

is a one-parameter group of unitary operators; the infinitesimal generator of this family is an extension of the differential operator

$$
-i \frac{d}{dx}
$$

defined on the space of continuously differentiable complex-valued functions with compact support on $\R.$ Thus

$$
T_{t} = e^{t \frac{d}{dx}}.
$$

In other words, motion on the line is generated by the momentum operator.

Stone's theorem has numerous applications in quantum mechanics. For instance, given an isolated quantum mechanical system, with Hilbert space of states $\mathcal{H}$, time evolution is a strongly continuous one-parameter unitary group on $\mathcal{H}$. The infinitesimal generator of this group is the system Hamiltonian.

Stone's Theorem can be recast using the language of the Fourier transform. The real line $\R$ is a locally compact abelian group. Non-degenerate *-representations of the group C*-algebra $C^*(\R)$ are in one-to-one correspondence with strongly continuous unitary representations of $\R,$ i.e., strongly continuous one-parameter unitary groups. On the other hand, the Fourier transform is a *-isomorphism from $C^*(\R)$ to $C_0(\R),$ the $C^*$-algebra of continuous complex-valued functions on the real line that vanish at infinity. Hence, there is a one-to-one correspondence between strongly continuous one-parameter unitary groups and *-representations of $C_0(\R).$ As every *-representation of $C_0(\R)$ corresponds uniquely to a self-adjoint operator, Stone's Theorem holds.

Therefore, the procedure for obtaining the infinitesimal generator of a strongly continuous one-parameter unitary group is as follows:

* Let $(U_{t})_{t \in \R}$ be a strongly continuous unitary representation of $\R$ on a Hilbert space $\mathcal{H}$.

* Integrate this unitary representation to yield a non-degenerate *-representation $\rho$ of $C^*(\R)$ on $\mathcal{H}$ by first defining

$$
\forall f \in C_c(\R): \qquad \rho(f) := \int_{\R} f(t) ~ U_{t} dt,
$$

and then extending $\rho$ to all of $C^*(\R)$ by continuity.

* Use the Fourier transform to obtain a non-degenerate *-representation $\tau$ of $C_0(\R)$ on $\mathcal{H}$.

* By the Riesz-Markov Theorem, $\tau$ gives rise to a projection-valued measure on $\R$ that is the resolution of the identity of a unique self-adjoint operator $A$, which may be unbounded.

* Then $A$ is the infinitesimal generator of $(U_{t})_{t \in \R}.$

The precise definition of $C^*(\R)$ is as follows. Consider the *-algebra $C_c(\R),$ the continuous complex-valued functions on $\R$ with compact support, where the multiplication is given by convolution. The completion of this *-algebra with respect to the $L^1$-norm is a Banach *-algebra, denoted by $(L^1(\R),\star).$ Then $C^*(\R)$ is defined to be the enveloping $C^*$-algebra of $(L^1(\R),\star)$, i.e., its completion with respect to the largest possible $C^*$-norm. It is a non-trivial fact that, via the Fourier transform, $C^*(\R)$ is isomorphic to $C_0(\R).$ A result in this direction is the Riemann-Lebesgue Lemma, which says that the Fourier transform maps $L^1(\R)$ to $C_0(\R).$

The Stone–von Neumann theorem generalizes Stone's theorem to a pair of self-adjoint operators, $(P,Q)$, satisfying the canonical commutation relation, and shows that these are all unitarily equivalent to the position operator and momentum operator on $L^2(\R).$

The Hille–Yosida theorem generalizes Stone's theorem to strongly continuous one-parameter semigroups of contractions on Banach spaces.
wiki/wikipedia/1007.txt
ADDED
@@ -0,0 +1,47 @@
In mathematics, the Schwartz kernel theorem is a foundational result in the theory of generalized functions, published by Laurent Schwartz in 1952. It states, in broad terms, that the generalized functions introduced by Schwartz (Schwartz distributions) have a two-variable theory that includes all reasonable bilinear forms on the space $\mathcal{D}$ of test functions. The space $\mathcal{D}$ itself consists of smooth functions of compact support.

Let $X$ and $Y$ be open sets in $\mathbb{R}^n$.

Every distribution $k \in \mathcal{D}'(X \times Y)$ defines a continuous linear map $K \colon \mathcal{D}(Y) \to \mathcal{D}'(X)$ such that

$$
\langle Kv, u \rangle = \langle k, u \otimes v \rangle \qquad (*)
$$

for every $u \in \mathcal{D}(X), v \in \mathcal{D}(Y)$.

Conversely, for every such continuous linear map $K$ there exists one and only one distribution $k \in \mathcal{D}'(X \times Y)$ such that (*) holds.

The distribution $k$ is the kernel of the map $K$.

Given a distribution $k \in \mathcal{D}'(X \times Y)$ one can always write the linear map K informally as

$$
Kv = \int_{Y} k(\cdot,y) v(y) d y
$$

so that

$$
\langle Kv,u \rangle = \int_{X} \int_{Y} k(x,y) v(y) u(x) d y d x.
$$
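
After discretization the informal formula becomes ordinary matrix arithmetic: the kernel is a matrix and the integral a weighted sum over grid points. A small numerical sketch (illustrative, assuming numpy):

import numpy as np

n = 200
y, dy = np.linspace(0.0, 1.0, n, retstep=True)
x = y
k = np.exp(-(x[:, None] - y[None, :]) ** 2 / 0.02)   # a smooth kernel k(x, y)
v = np.sin(2 * np.pi * y)                            # a test function v(y)
Kv = k @ v * dy                                      # (Kv)(x_i) = sum_j k_ij v_j dy
print(Kv.shape)                                      # (200,): a function of x on the grid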
The traditional kernel functions $K(x,y)$ of two variables of the theory of integral operators having been expanded in scope to include their generalized function analogues, which are allowed to be more singular in a serious way, a large class of operators from $\mathcal{D}$ to its dual space $\mathcal{D}'$ of distributions can be constructed. The point of the theorem is to assert that the extended class of operators can be characterised abstractly, as containing all operators subject to a minimum continuity condition. A bilinear form on $\mathcal{D}$ arises by pairing the image distribution with a test function.

A simple example is that the natural embedding of the test function space $\mathcal{D}$ into $\mathcal{D}'$ - sending every test function $f$ into the corresponding distribution $[f]$ - corresponds to the delta distribution

$$
\delta(x-y)
$$

concentrated at the diagonal of the underlying Euclidean space, in terms of the Dirac delta function $\delta$. While this is at most an observation, it shows how the distribution theory adds to the scope. Integral operators are not so 'singular'; another way to put it is that for $K$ a continuous kernel, only compact operators are created on a space such as the continuous functions on $[0,1]$. The identity operator $I$ is far from compact, and its kernel is intuitively speaking approximated by functions on $[0,1]\times[0,1]$ with a spike along the diagonal $x=y$ and vanishing elsewhere.

This result implies that the formation of distributions has a major property of 'closure' within the traditional domain of functional analysis. It was interpreted (comment of Jean Dieudonné) as a strong verification of the suitability of the Schwartz theory of distributions to mathematical analysis more widely seen. In his Éléments d'analyse volume 7, p. 3 he notes that the theorem includes differential operators on the same footing as integral operators, and concludes that it is perhaps the most important modern result of functional analysis. He goes on immediately to qualify that statement, saying that the setting is too 'vast' for differential operators, because of the property of monotonicity with respect to the support of a function, which is evident for differentiation. Even monotonicity with respect to singular support is not characteristic of the general case; its consideration leads in the direction of the contemporary theory of pseudo-differential operators.

Dieudonné proves a version of the Schwartz result valid for smooth manifolds, and additional supporting results, in sections 23.9 to 23.12 of that book.

Much of the theory of nuclear spaces was developed by Alexander Grothendieck while investigating the Schwartz kernel theorem, and published in 1955. We have the following generalization of the theorem.

Schwartz kernel theorem: Suppose that X is nuclear, Y is locally convex, and v is a continuous bilinear form on $X \times Y$. Then v originates from a space of the form $X^{\prime}_{A^{\prime}} \widehat{\otimes}_{\epsilon} Y^{\prime}_{B^{\prime}}$ where $A^{\prime}$ and $B^{\prime}$ are suitable equicontinuous subsets of $X^{\prime}$ and $Y^{\prime}$. Equivalently, v is of the form

$$
v(x, y) = \sum_{i=1}^{\infty} \lambda_i \left\langle x, x_i^{\prime} \right\rangle \left\langle y, y_i^{\prime} \right\rangle \quad \text{for all } (x, y) \in X \times Y,
$$

where $\left( \lambda_i \right) \in l^1$ and each of $\{ x^{\prime}_1, x^{\prime}_2, \ldots \}$ and $\{ y^{\prime}_1, y^{\prime}_2, \ldots \}$ are equicontinuous. Furthermore, these sequences can be taken to be null sequences (i.e. converging to 0) in $X^{\prime}_{A^{\prime}}$ and $Y^{\prime}_{B^{\prime}}$, respectively.
wiki/wikipedia/1008.txt
ADDED
@@ -0,0 +1,5 @@
Hierarchical clustering is one method for finding community structures in a network. The technique arranges the network into a hierarchy of groups according to a specified weight function. The data can then be represented in a tree structure known as a dendrogram. Hierarchical clustering can either be agglomerative or divisive depending on whether one proceeds through the algorithm by adding links to or removing links from the network, respectively. One divisive technique is the Girvan–Newman algorithm.

[Fig. 1 (image removed): Example of a dendrogram constructed using a hierarchical clustering algorithm.]

Edge betweenness centrality has been used successfully as a weight in the Girvan–Newman algorithm. This method provides a computationally less-costly alternative to the Girvan–Newman algorithm while yielding similar results.
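
As a concrete illustration, the divisive Girvan–Newman procedure is available in networkx; a minimal sketch that prints the first few levels of the resulting hierarchy:

import itertools
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.karate_club_graph()
hierarchy = girvan_newman(G)              # iterator over successively finer partitions
for partition in itertools.islice(hierarchy, 3):
    print([sorted(c) for c in partition])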
wiki/wikipedia/1009.txt
ADDED
@@ -0,0 +1,37 @@
In databases and transaction processing (transaction management), snapshot isolation is a guarantee that all reads made in a transaction will see a consistent snapshot of the database (in practice it reads the last committed values that existed at the time it started), and the transaction itself will successfully commit only if no updates it has made conflict with any concurrent updates made since that snapshot.

Snapshot isolation has been adopted by several major database management systems, such as InterBase, Firebird, Oracle, MySQL, PostgreSQL, SQL Anywhere, MongoDB and Microsoft SQL Server (2005 and later). The main reason for its adoption is that it allows better performance than serializability, yet still avoids most of the concurrency anomalies that serializability avoids (but not always all). In practice snapshot isolation is implemented within multiversion concurrency control (MVCC), where generational values of each data item (versions) are maintained: MVCC is a common way to increase concurrency and performance by generating a new version of a database object each time the object is written, and allowing transactions' read operations to use one of the several most recent versions of each object. Snapshot isolation has been used to criticize the ANSI SQL-92 standard's definition of isolation levels, as it exhibits none of the "anomalies" that the SQL standard prohibited, yet is not serializable (the anomaly-free isolation level defined by ANSI).

In spite of its distinction from serializability, snapshot isolation is sometimes referred to as serializable by Oracle.

A transaction executing under snapshot isolation appears to operate on a personal snapshot of the database, taken at the start of the transaction. When the transaction concludes, it will successfully commit only if the values updated by the transaction have not been changed externally since the snapshot was taken. Such a write–write conflict will cause the transaction to abort.

In a write skew anomaly, two transactions (T1 and T2) concurrently read an overlapping data set (e.g. values V1 and V2), concurrently make disjoint updates (e.g. T1 updates V1, T2 updates V2), and finally concurrently commit, neither having seen the update performed by the other. Were the system serializable, such an anomaly would be impossible, as either T1 or T2 would have to occur "first", and be visible to the other. In contrast, snapshot isolation permits write skew anomalies.

As a concrete example, imagine V1 and V2 are two balances held by a single person, Phil. The bank will allow either V1 or V2 to run a deficit, provided the total held in both is never negative (i.e. V1 + V2 ≥ 0). Both balances are currently $100. Phil initiates two transactions concurrently, T1 withdrawing $200 from V1, and T2 withdrawing $200 from V2.

If the database guaranteed serializable transactions, the simplest way of coding T1 is to deduct $200 from V1, and then verify that V1 + V2 ≥ 0 still holds, aborting if not. T2 similarly deducts $200 from V2 and then verifies V1 + V2 ≥ 0. Since the transactions must serialize, either T1 happens first, leaving V1 = −$100, V2 = $100, and preventing T2 from succeeding (since V1 + (V2 − $200) is now −$200), or T2 happens first and similarly prevents T1 from committing.

If the database is under snapshot isolation (MVCC), however, T1 and T2 operate on private snapshots of the database: each deducts $200 from an account, and then verifies that the constraint still holds (the new total each computes is zero), using the other account value that held when the snapshot was taken. Since neither update conflicts, both commit successfully, leaving V1 = V2 = −$100, and V1 + V2 = −$200.
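
This outcome can be reproduced with a toy Python model (illustrative only, not a real database engine):

db = {"V1": 100, "V2": 100}

def withdraw(snapshot, account, amount):
    new = dict(snapshot)
    new[account] -= amount
    assert new["V1"] + new["V2"] >= 0   # constraint holds against the snapshot
    return {account: new[account]}      # the transaction's write set

snap1, snap2 = dict(db), dict(db)       # both start from the same snapshot
w1 = withdraw(snap1, "V1", 200)         # sees -100 + 100 = 0, passes
w2 = withdraw(snap2, "V2", 200)         # likewise passes
assert not (w1.keys() & w2.keys())      # disjoint write sets: no write-write conflict
db.update(w1); db.update(w2)            # so both commit under snapshot isolation
print(db)                               # {'V1': -100, 'V2': -100}: write skew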

Some systems built using multiversion concurrency control (MVCC) may support (only) snapshot isolation to allow transactions to proceed without worrying about concurrent operations, and more importantly without needing to re-verify all read operations when the transaction finally commits. This is convenient because MVCC maintains a series of recent history consistent states. The only information that must be stored during the transaction is a list of updates made, which can be scanned for conflicts fairly easily before being committed. However, MVCC systems (such as MarkLogic) will use locks to serialize writes together with MVCC to obtain some of the performance gains and still support the stronger "serializability" level of isolation.

Potential inconsistency problems arising from write skew anomalies can be fixed by adding (otherwise unnecessary) updates to the transactions in order to enforce the serializability property.

Materialize the conflict: Add a special conflict table, which both transactions update in order to create a direct write–write conflict.

Promotion: Have one transaction "update" a read-only location (replacing a value with the same value) in order to create a direct write–write conflict (or use an equivalent promotion, e.g. Oracle's SELECT FOR UPDATE).

In the example above, we can materialize the conflict by adding a new table which makes the hidden constraint explicit, mapping each person to their total balance. Phil would start off with a total balance of $200, and each transaction would attempt to subtract $200 from this, creating a write–write conflict that would prevent the two from succeeding concurrently. However, this approach violates the normal form.

Alternatively, we can promote one of the transaction's reads to a write. For instance, T2 could set V1 = V1, creating an artificial write–write conflict with T1 and, again, preventing the two from succeeding concurrently. This solution may not always be possible.

In general, therefore, snapshot isolation puts some of the problem of maintaining non-trivial constraints onto the user, who may not appreciate either the potential pitfalls or the possible solutions. The upside to this transfer is better performance.

Snapshot isolation is called "serializable" mode in Oracle and PostgreSQL versions prior to 9.1, which may cause confusion with the "real serializability" mode. There are arguments both for and against this decision; what is clear is that users must be aware of the distinction to avoid possible undesired anomalous behavior in their database system logic.

Snapshot isolation arose from work on multiversion concurrency control databases, where multiple versions of the database are maintained concurrently to allow readers to execute without colliding with writers. Such a system allows a natural definition and implementation of such an isolation level. This implementation of serializability is well-suited to multiversion concurrency control databases, and has been adopted in PostgreSQL 9.1, where it is referred to as "Serializable Snapshot Isolation", abbreviated to SSI. When used consistently, this eliminates the need for the above workarounds. The downside over snapshot isolation is an increase in aborted transactions. This can perform better or worse than snapshot isolation with the above workarounds, depending on workload.

In 2011, Jimenez-Peris et al. filed a patent where it was shown how it was possible to scale to many millions of update transactions per second with a new method for attaining snapshot isolation in a distributed manner. The method is based on the observation that it becomes possible to commit transactions fully in parallel without any coordination, thereby removing the bottleneck of traditional transactional processing methods. The method uses a commit sequencer that generates commit timestamps and a snapshot server that advances the current snapshot as gaps are filled in the serialization order. This method is the basis of the database LeanXcale. The first implementation of this method was made in 2010 as part of the CumuloNimbo European Project.
wiki/wikipedia/101.txt
ADDED
@@ -0,0 +1,7 @@
The Asian Sudoku Championship (ASC) is an annual international sudoku competition organised by a member of the World Puzzle Federation (WPF). The first official event was held in Jeju Island, South Korea in 2018 and the latest event was held in Hyderabad, India in 2020. National teams are determined by local affiliates of the WPF. The competition typically consists of several classic sudokus and variations to be solved by all competitors over multiple timed rounds.

In the individual championship, Seungjae Kwak of South Korea, Kota Morinishi of Japan and Sun Cheran of China have each won one title.

In the team championship, India has won twice and Japan has won once.

Most of the champions have won with a considerable lead, but the lower ranks have been closely contested.
wiki/wikipedia/1010.txt
ADDED
@@ -0,0 +1,107 @@
GrGen.NET is a software development tool that offers programming languages (domain-specific languages) that are optimized for the processing of graph structured data.

The core of the languages consists of modular graph rewrite rules, which are built on declarative graph pattern matching and rewriting; they are supplemented by many of the constructs that are used in imperative and object-oriented programming, and are completed with language devices known from database query languages.

The Graph Rewrite GENerator compiles the languages into efficient CLI assemblies (via C# code as an intermediate step), which can be integrated via an API into code written in any .NET language.

GrGen can be executed under Windows and Linux (Mono needed) and is available as open source under LGPL v3.

For rapid prototyping and debugging, an interactive shell and a (VCG) graph viewer are included in the package.

With its languages and its visual and stepwise debugging, GrGen allows one to develop at the natural level of abstraction of graph-based representations, such as those employed in engineering, model transformation, computational linguistics, or compiler construction (as intermediate representation).

GrGen increases productivity for those kinds of tasks far beyond what can be achieved by programming in a traditional programming language; due to many implemented performance optimizations it still allows one to achieve high-performance solutions.

Its authors claim that the system offers the highest combined speed of development and execution available for the algorithmic processing of graph-based representations (based on their performance regarding diverse tasks posed at different editions of the Transformation Tool Contest (formerly GraBaTs)).

Below is an example containing a graph model and rule specifications from the GrGen.NET solution to a case posed at the Transformation Tool Contest.

Graph model:

node class GridNode {
    food:int;
    pheromones:int;
}

node class GridCornerNode extends GridNode;

node class AntHill extends GridNode {
    foodCountdown:int = 10;
}

node class Ant {
    hasFood:boolean;
}

edge class GridEdge connect GridNode[1] -> GridNode[1];

edge class PathToHill extends GridEdge;

edge class AntPosition;

Rewrite rules:

rule TakeFood(curAnt:Ant)
{
    curAnt -:AntPosition-> n:GridNode\AntHill;
    if { !curAnt.hasFood && n.food > 0; }
    modify {
        eval {
            curAnt.hasFood = true;
            n.food = n.food - 1;
        }
    }
}

rule SearchAlongPheromones(curAnt:Ant)
{
    curAnt -oldPos:AntPosition-> old:GridNode <-:PathToHill- new:GridNode;
    if { new.pheromones > 9; }
    modify {
        delete(oldPos);
        curAnt -:AntPosition-> new;
    }
}

test ReachedEndOfWorld(curAnt:Ant) : (GridNode)
{
    curAnt -:AntPosition-> n:GridNode\AntHill;
    negative {
        n <-:PathToHill-;
    }
    return (n);
}
wiki/wikipedia/1011.txt
ADDED
@@ -0,0 +1,39 @@
In computer science, load-linked/store-conditional (LL/SC), sometimes known as load-reserved/store-conditional (LR/SC), are a pair of instructions used in multithreading to achieve synchronization. Load-link returns the current value of a memory location, while a subsequent store-conditional to the same memory location will store a new value only if no updates have occurred to that location since the load-link. Together, this implements a lock-free atomic read-modify-write operation.
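
The classic retry-loop idiom built on the pair can be sketched as a toy software model (illustrative Python; a version counter and a lock stand in for the hardware's reservation tracking):

import threading

class Cell:
    def __init__(self, value):
        self.value = value
        self.version = 0               # bumped on every successful store
        self._lock = threading.Lock()  # models the atomicity of the hardware

    def load_link(self):
        with self._lock:
            return self.value, self.version

    def store_conditional(self, new, linked_version):
        with self._lock:
            if self.version != linked_version:
                return False           # a write intervened: SC fails
            self.value, self.version = new, self.version + 1
            return True

def atomic_increment(cell):
    while True:                        # retry until the SC succeeds
        old, ver = cell.load_link()
        if cell.store_conditional(old + 1, ver):
            return old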

"Load-linked" is also known as load-link, load-reserved, and load-locked.

LL/SC was originally proposed by Jensen, Hagensen, and Broughton for the S-1 AAP multiprocessor at Lawrence Livermore National Laboratory.

If any updates have occurred, the store-conditional is guaranteed to fail, even if the value read by the load-link has since been restored. As such, an LL/SC pair is stronger than a read followed by a compare-and-swap (CAS), which will not detect updates if the old value has been restored (see ABA problem).

Real implementations of LL/SC do not always succeed even if there are no concurrent updates to the memory location in question. Any exceptional events between the two operations, such as a context switch, another load-link, or even (on many platforms) another load or store operation, will cause the store-conditional to spuriously fail. Older implementations will fail if there are any updates broadcast over the memory bus. This is called weak LL/SC by researchers, as it breaks many theoretical LL/SC algorithms. Weakness is relative, and some weak implementations can be used for some algorithms.

LL/SC is more difficult to emulate than CAS. Additionally, stopping running code between paired LL/SC instructions, such as when single-stepping through code, can prevent forward progress, making debugging tricky.

Nevertheless, LL/SC is equivalent to CAS in the sense that either primitive can be implemented in terms of the other, in O(1) and in a wait-free manner.

LL/SC instructions are supported by:

* Alpha: ldl_l/stl_c and ldq_l/stq_c
* PowerPC/Power ISA: lwarx/stwcx and ldarx/stdcx
* MIPS: ll/sc
* ARM: ldrex/strex (ARMv6 and v7), and ldxr/stxr (ARMv8)
* RISC-V: lr/sc
* ARC: LLOCK/SCOND

Some CPUs require the address being accessed exclusively to be configured in write-through mode.

Typically, CPUs track the load-linked address at a cache-line or other granularity, such that any modification to any portion of the cache line (whether via another core's store-conditional or merely by an ordinary store) is sufficient to cause the store-conditional to fail.

All of these platforms provide weak LL/SC. The PowerPC implementation allows an LL/SC pair to wrap loads and even stores to other cache lines (although this approach is vulnerable to false cache line sharing). This allows it to implement, for example, lock-free reference counting in the face of changing object graphs with arbitrary counter reuse (which otherwise requires double compare-and-swap, DCAS). RISC-V provides an architectural guarantee of eventual progress for LL/SC sequences of limited length.

Some ARM implementations define platform-dependent blocks, ranging from 8 bytes to 2048 bytes, and an LL/SC attempt in any given block fails if there is between the LL and SC a normal memory access inside the same block. Other ARM implementations fail if there is a modification anywhere in the whole address space. The former implementation is the stronger and more practical of the two.

LL/SC has two advantages over CAS when designing a load–store architecture: reads and writes are separate instructions, as required by the design philosophy (and pipeline architecture); and both instructions can be performed using only two registers (address and value), fitting naturally into common 2-operand ISAs. CAS, on the other hand, requires three registers (address, old value, new value) and a dependency between the value read and the value written. x86, being a CISC architecture, does not have this constraint; though modern chips may well translate a CAS instruction into separate LL/SC micro-operations internally.

Hardware LL/SC implementations typically do not allow nesting of LL/SC pairs. A nesting LL/SC mechanism can be used to provide a MCAS primitive (multi-word CAS, where the words can be scattered). In 2013, Trevor Brown, Faith Ellen, and Eric Ruppert implemented in software a multi-address LL/SC extension (which they call LLX/SCX) that relies on automated code generation; they have used it to implement one of the best-performing concurrent binary search trees (actually a chromatic tree), slightly beating the JDK CAS-based skip list implementation.
wiki/wikipedia/1012.txt
ADDED
@@ -0,0 +1 @@
In theoretical physics, the Vafa–Witten theorem, named after Cumrun Vafa and Edward Witten, is a theorem that shows that vector-like global symmetries (those that transform as expected under reflections) such as isospin and baryon number in vector-like gauge theories like quantum chromodynamics cannot be spontaneously broken as long as the theta angle is zero. The theorem can be proved by showing the exponential fall-off of the fermion propagator.
wiki/wikipedia/1013.txt
ADDED
@@ -0,0 +1,23 @@
In graph theory, the thickness of a graph G is the minimum number of planar graphs into which the edges of G can be partitioned. That is, if there exists a collection of k planar graphs, all having the same set of vertices, such that the union of these planar graphs is G, then the thickness of G is at most k. In other words, the thickness of a graph is the minimum number of planar subgraphs whose union equals G.

Thus, a planar graph has thickness 1. Graphs of thickness 2 are called biplanar graphs. The concept of thickness originates in the 1962 conjecture of Frank Harary: for any graph on 9 points, either itself or its complementary graph is non-planar. The problem is equivalent to determining whether the complete graph K<sub>9</sub> is biplanar (it is not, and the conjecture is true). A comprehensive survey on the state of the art of the topic as of 1998 was written by Petra Mutzel, Thomas Odenthal and Mark Scharbrodt.

The thickness of the complete graph on n vertices, K<sub>n</sub>, is

$$
\left \lfloor \frac{n+7}{6} \right\rfloor,
$$

except when n = 9, 10, for which the thickness is three.
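
In code, the closed form together with its exceptional cases is a one-liner (Python):

def thickness_complete(n: int) -> int:
    """Thickness of the complete graph K_n, per the formula above."""
    return 3 if n in (9, 10) else (n + 7) // 6

print([thickness_complete(n) for n in (3, 8, 9, 10, 11, 17)])  # [1, 2, 3, 3, 3, 4]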

With some exceptions, the thickness of a complete bipartite graph K<sub>a,b</sub> is generally:

$$
\left \lceil \frac{ab}{2(a+b-2)} \right \rceil.
$$

Every forest is planar, and every planar graph can be partitioned into at most three forests. Therefore, the thickness of any graph G is at most equal to the arboricity of the same graph (the minimum number of forests into which it can be partitioned) and at least equal to the arboricity divided by three. The thickness of G is also within constant factors of another standard graph invariant, the degeneracy, defined as the maximum, over subgraphs of G, of the minimum degree within the subgraph. If an n-vertex graph has thickness t then it necessarily has at most t(3n − 6) edges, from which it follows that its degeneracy is at most 6t − 1. In the other direction, if a graph has degeneracy D then it has arboricity, and thickness, at most D.

Thickness is closely related to the problem of simultaneous embedding. If two or more planar graphs all share the same vertex set, then it is possible to embed all these graphs in the plane, with the edges drawn as curves, so that each vertex has the same position in all the different drawings. However, it may not be possible to construct such a drawing while keeping the edges drawn as straight line segments.

A different graph invariant, the rectilinear thickness or geometric thickness of a graph G, counts the smallest number of planar graphs into which G can be decomposed subject to the restriction that all of these graphs can be drawn simultaneously with straight edges. The book thickness adds an additional restriction, that all of the vertices be drawn in convex position, forming a circular layout of the graph. However, in contrast to the situation for arboricity and degeneracy, no two of these three thickness parameters are always within a constant factor of each other.

It is NP-hard to compute the thickness of a given graph, and NP-complete to test whether the thickness is at most two. However, the connection to arboricity allows the thickness to be approximated to within an approximation ratio of 3 in polynomial time.
wiki/wikipedia/1014.txt
ADDED
@@ -0,0 +1,101 @@
In mathematics, the image of a function is the set of all output values it may produce.

More generally, evaluating a given function $f$ at each element of a given subset $A$ of its domain produces a set, called the "image of $A$ under (or through) $f$". Similarly, the inverse image (or preimage) of a given subset $B$ of the codomain of $f,$ is the set of all elements of the domain that map to the members of $B.$

Image and inverse image may also be defined for general binary relations, not just functions.

The word "image" is used in three related ways. In these definitions, $f : X \to Y$ is a function from the set $X$ to the set $Y.$

If $x$ is a member of $X,$ then the image of $x$ under $f,$ denoted $f(x),$ is the value of $f$ when applied to $x.$ $f(x)$ is alternatively known as the output of $f$ for argument $x.$

Given $y,$ the function $f$ is said to "take the value $y$" or "take $y$ as a value" if there exists some $x$ in the function's domain such that $f(x) = y.$

Similarly, given a set $S,$ $f$ is said to "take a value in $S$" if there exists some $x$ in the function's domain such that $f(x) \in S.$

However, "$f$ takes [all] values in $S$" and "$f$ is valued in $S$" means that $f(x) \in S$ for every point $x$ in $f$'s domain.

Throughout, let $f : X \to Y$ be a function.

The image under $f$ of a subset $A$ of $X$ is the set of all $f(a)$ for $a\in A.$ It is denoted by $f[A],$ or by $f(A),$ when there is no risk of confusion. Using set-builder notation, this definition can be written as

<math display=block>f[A] = \{f(a) : a \in A\}.</math>

This induces a function $f[\cdot] : \wp(X) \to \wp(Y),$ where $\wp(S)$ denotes the power set of a set $S;$ that is the set of all subsets of $S.$ See below for more.

The image of a function is the image of its entire domain, also known as the range of the function. This usage should be avoided because the word "range" is also commonly used to mean the codomain of $f.$

If $R$ is an arbitrary binary relation on $X \times Y,$ then the set $\{ y \in Y : x R y \text{ for some } x \in X \}$ is called the image, or the range, of $R.$ Dually, the set $\{ x \in X : x R y \text{ for some } y \in Y \}$ is called the domain of $R.$

Let $f$ be a function from $X$ to $Y.$ The preimage or inverse image of a set $B \subseteq Y$ under $f,$ denoted by $f^{-1}[B],$ is the subset of $X$ defined by

<math display="block">f^{-1}[ B ] = \{ x \in X : f(x) \in B \}.</math>

Other notations include $f^{-1}(B)$ and $f^{-}(B).$

The inverse image of a singleton set, denoted by $f^{-1}[\{ y \}]$ or by $f^{-1}[y],$ is also called the fiber or fiber over $y$ or the level set of $y.$ The set of all the fibers over the elements of $Y$ is a family of sets indexed by $Y.$

For example, for the function $f(x) = x^2,$ the inverse image of $\{ 4 \}$ would be $\{ -2, 2 \}.$ Again, if there is no risk of confusion, $f^{-1}[B]$ can be denoted by $f^{-1}(B),$ and $f^{-1}$ can also be thought of as a function from the power set of $Y$ to the power set of $X.$ The notation $f^{-1}$ should not be confused with that for inverse function, although it coincides with the usual one for bijections in that the inverse image of $B$ under $f$ is the image of $B$ under $f^{-1}.$
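
On finite sets both notions are directly computable; a small illustrative Python sketch:

def image(f, A):
    """The image f[A] = {f(a) : a in A}."""
    return {f(a) for a in A}

def preimage(f, X, B):
    """The preimage f^{-1}[B] = {x in X : f(x) in B}."""
    return {x for x in X if f(x) in B}

X = {-3, -2, -1, 0, 1, 2, 3}
f = lambda x: x * x
print(image(f, X))          # {0, 1, 4, 9}
print(preimage(f, X, {4}))  # {-2, 2}: the fiber over 4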

The traditional notations used in the previous section may be confusing, because they do not distinguish the original function $f : X \to Y$ from the image-of-sets function $f : \mathcal{P}(X) \to \mathcal{P}(Y)$; likewise they do not distinguish the inverse function (assuming one exists) from the inverse image function (which again relates the powersets). An alternative is to give explicit names for the image and preimage as functions between power sets:

* $f^\rightarrow : \mathcal{P}(X) \to \mathcal{P}(Y)$ with $f^\rightarrow(A) = \{ f(a)| a \in A\}$
* $f^\leftarrow : \mathcal{P}(Y) \to \mathcal{P}(X)$ with $f^\leftarrow(B) = \{ a \in X | f(a) \in B\}$
* $f_\star : \mathcal{P}(X) \to \mathcal{P}(Y)$ instead of $f^\rightarrow$
* $f^\star : \mathcal{P}(Y) \to \mathcal{P}(X)$ instead of $f^\leftarrow$
* An alternative notation for $f[A]$ used in mathematical logic and set theory is $fA.$
* Some texts refer to the image of $f$ as the range of $f,$ but this usage should be avoided because the word "range" is also commonly used to mean the codomain of $f.$

# $f : \{ 1, 2, 3 \} \to \{ a, b, c, d \}$ defined by <math>
f(x) = \left\{\begin{matrix}
a, & \mbox{if }x=1 \\
a, & \mbox{if }x=2 \\
c, & \mbox{if }x=3.
\end{matrix}\right.
</math> The image of the set $\{ 2, 3 \}$ under $f$ is $f(\{ 2, 3 \}) = \{ a, c \}.$ The image of the function $f$ is $\{ a, c \}.$ The preimage of $a$ is $f^{-1}(\{ a \}) = \{ 1, 2 \}.$ The preimage of $\{ a, b \}$ is likewise $f^{-1}(\{ a, b \}) = \{ 1, 2 \}.$ The preimage of $\{ b, d \}$ is the empty set $\{ \} = \varnothing.$

# $f : \R \to \R$ defined by $f(x) = x^2.$ The image of $\{ -2, 3 \}$ under $f$ is $f(\{ -2, 3 \}) = \{ 4, 9 \},$ and the image of $f$ is $\R^+$ (the set of all positive real numbers and zero). The preimage of $\{ 4, 9 \}$ under $f$ is $f^{-1}(\{ 4, 9 \}) = \{ -3, -2, 2, 3 \}.$ The preimage of the set $N = \{ n \in \R : n < 0 \}$ under $f$ is the empty set, because the negative numbers do not have square roots in the set of reals.

# $f : \R^2 \to \R$ defined by $f(x, y) = x^2 + y^2.$ The fiber $f^{-1}(\{ a \})$ is a circle about the origin, the origin itself, or the empty set, depending on whether $a > 0,$ $a = 0,$ or $a < 0,$ respectively. (If $a > 0,$ then the fiber $f^{-1}(\{ a \})$ is the set of all $(x, y) \in \R^2$ satisfying $x^2 + y^2 = a,$ i.e. the origin-centered circle with radius $\sqrt{a}.$)

# If $M$ is a manifold and $\pi : TM \to M$ is the canonical projection from the tangent bundle $TM$ to $M,$ then the fibers of $\pi$ are the tangent spaces $T_x(M) \text{ for } x \in M.$ This is also an example of a fiber bundle.

# A quotient group is a homomorphic image.

For every function $f : X \to Y$ and all subsets $A \subseteq X$ and $B \subseteq Y,$ the following properties hold:

Also:

* $f(A) \cap B = \varnothing \text{ if and only if } A \cap f^{-1}(B) = \varnothing$

For functions $f : X \to Y$ and $g : Y \to Z$ with subsets $A \subseteq X$ and $C \subseteq Z,$ the following properties hold:

* $(g \circ f)(A) = g(f(A))$
* $(g \circ f)^{-1}(C) = f^{-1}(g^{-1}(C))$

For a function $f : X \to Y$ and subsets $A, B \subseteq X$ and $S, T \subseteq Y,$ the following properties hold:

The results relating images and preimages to the (Boolean) algebra of intersection and union work for any collection of subsets, not just for pairs of subsets:

* $f\left(\bigcup_{s\in S}A_s\right) = \bigcup_{s\in S} f\left(A_s\right)$
* $f\left(\bigcap_{s\in S}A_s\right) \subseteq \bigcap_{s\in S} f\left(A_s\right)$
* $f^{-1}\left(\bigcup_{s\in S}B_s\right) = \bigcup_{s\in S} f^{-1}\left(B_s\right)$
* $f^{-1}\left(\bigcap_{s\in S}B_s\right) = \bigcap_{s\in S} f^{-1}\left(B_s\right)$

(Here, $S$ can be infinite, even uncountably infinite.)

With respect to the algebra of subsets described above, the inverse image function is a lattice homomorphism, while the image function is only a semilattice homomorphism (that is, it does not always preserve intersections).
wiki/wikipedia/1015.txt
ADDED
@@ -0,0 +1,259 @@
In logic and computer science, unification is an algorithmic process of solving equations between symbolic expressions.

Depending on which expressions (also called terms) are allowed to occur in an equation set (also called unification problem), and which expressions are considered equal, several frameworks of unification are distinguished. If higher-order variables, that is, variables representing functions, are allowed in an expression, the process is called higher-order unification, otherwise first-order unification. If a solution is required to make both sides of each equation literally equal, the process is called syntactic or free unification, otherwise semantic or equational unification, or E-unification, or unification modulo theory.

A solution of a unification problem is denoted as a substitution, that is, a mapping assigning a symbolic value to each variable of the problem's expressions. A unification algorithm should compute for a given problem a complete and minimal substitution set, that is, a set covering all its solutions, and containing no redundant members. Depending on the framework, a complete and minimal substitution set may have at most one, at most finitely many, or possibly infinitely many members, or may not exist at all. In some frameworks it is generally impossible to decide whether any solution exists. For first-order syntactical unification, Martelli and Montanari gave an algorithm that reports unsolvability or computes a complete and minimal singleton substitution set containing the so-called most general unifier.

For example, using x,y,z as variables, the singleton equation set { cons(x,cons(x,nil)) = cons(2,y) } is a syntactic first-order unification problem that has the substitution { x ↦ 2, y ↦ cons(2,nil) } as its only solution.
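
A compact Robinson-style syntactic unifier (an illustrative sketch, not the Martelli–Montanari algorithm itself) can solve this example; variables are encoded as strings and terms as tuples whose first element is the function symbol:

def walk(t, s):
    while isinstance(t, str) and t in s:   # chase variable bindings
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

def unify(t1, t2, s=None):
    """Return a most general unifier as a dict, or None if none exists."""
    s = {} if s is None else s
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if isinstance(t1, str):                # an unbound variable: bind it
        return None if occurs(t1, t2, s) else {**s, t1: t2}
    if isinstance(t2, str):
        return unify(t2, t1, s)
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None                        # clash of function symbols/arities
    for a, b in zip(t1[1:], t2[1:]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s

# The example above: cons(x, cons(x, nil)) = cons(2, y)
lhs = ('cons', 'x', ('cons', 'x', ('nil',)))
rhs = ('cons', ('2',), 'y')
print(unify(lhs, rhs))
# {'x': ('2',), 'y': ('cons', 'x', ('nil',))}; resolving x gives y = cons(2, nil)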
|
8 |
+
|
9 |
+
The syntactic first-order unification problem { y = cons(2,y) } has no solution over the set of finite terms; however, it has the single solution { y ↦ cons(2,cons(2,cons(2,...))) } over the set of infinite trees.
|
10 |
+
|
11 |
+
The semantic first-order unification problem { a⋅x = x⋅a } has each substitution of the form { x ↦ a⋅...⋅a } as a solution in a semigroup, i.e. if (⋅) is considered associative; the same problem, viewed in an abelian group, where (⋅) is considered also commutative, has any substitution at all as a solution.
|
12 |
+
|
13 |
+
The singleton set { a = y(x) } is a syntactic second-order unification problem, since y is a function variable.
|
14 |
+
|
15 |
+
One solution is { x ↦ a, y ↦ (identity function) }; another one is { y ↦ (constant function mapping each value to a), x ↦ (any value) }.
|
16 |
+
|
17 |
+
A unification algorithm was first discovered by Jacques Herbrand, while a first formal investigation can be attributed to John Alan Robinson, who used first-order syntactical unification as a basic building block of his resolution procedure for first-order logic, a great step forward in automated reasoning technology, as it eliminated one source of combinatorial explosion: searching for instantiation of terms. Today, automated reasoning is still the main application area of unification.
|
18 |
+
|
19 |
+
Syntactical first-order unification is used in logic programming and programming language type system implementation, especially in Hindley–Milner based type inference algorithms.
|
20 |
+
|
21 |
+
Semantic unification is used in SMT solvers, term rewriting algorithms and cryptographic protocol analysis.
|
22 |
+
|
23 |
+
Higher-order unification is used in proof assistants, for example Isabelle and Twelf, and restricted forms of higher-order unification (higher-order pattern unification) are used in some programming language implementations, such as lambdaProlog, as higher-order patterns are expressive, yet their associated unification procedure retains theoretical properties closer to first-order unification.
|
24 |
+
|
25 |
+
Formally, a unification approach presupposes
|
26 |
+
|
27 |
+
* An infinite set $V$ of variables. For higher-order unification, it is convenient to choose $V$ disjoint from the set of lambda-term bound variables.
|
28 |
+
|
29 |
+
* A set $T$ of terms such that $V \subseteq T$. For first-order unification, $T$ is usually the set of first-order terms (terms built from variable and function symbols). For higher-order unification $T$ consists of first-order terms and lambda terms (terms containing some higher-order variables).
|
30 |
+
|
31 |
+
* A mapping vars: $T \rightarrow$ $\mathbb{P}$$(V)$, assigning to each term $t$ the set $\text{vars}(t) \subsetneq V$ of free variables occurring in $t$.
|
32 |
+
|
33 |
+
* An equivalence relation $\equiv$ on $T$, indicating which terms are considered equal. For first-order E-unification, $\equiv$ reflects the background knowledge about certain function symbols; for example, if $\oplus$ is considered commutative, $t\equiv u$ if $u$ results from $t$ by swapping the arguments of $\oplus$ at some (possibly all) occurrences. In the most typical case that there is no background knowledge at all, then only literally, or syntactically, identical terms are considered equal. In this case, ≡ is called the free theory (because it is a free object), the empty theory (because the set of equational sentences, or the background knowledge, is empty), the theory of uninterpreted functions (because unification is done on uninterpreted terms), or the theory of constructors (because all function symbols just build up data terms, rather than operating on them). For higher-order unification, usually $t\equiv u$ if $t$ and $u$ are alpha equivalent.
|
34 |
+
|
35 |
+
Given a set $V$ of variable symbols, a set $C$ of constant symbols and sets $F_n$ of n-ary function symbols, also called operator symbols, for each natural number $n \geq 1$, the set of (unsorted first-order) terms $T$ is recursively defined to be the smallest set with the following properties:
|
36 |
+
|
37 |
+
* every variable symbol is a term: $V \subseteq T$,
|
38 |
+
|
39 |
+
* every constant symbol is a term: $C \subseteq T$,
|
40 |
+
|
41 |
+
* from every n terms $t_1, \ldots, t_n$, and every n-ary function symbol $f \in F_n$, a larger term $f(t_1, ..., t_n)$ can be built.
|
42 |
+
|
43 |
+
For example, if $x\in V$ is a variable symbol, $1\in C$ is a constant symbol, and $\text{add} \in F_2$ is a binary function symbol, then $x\in T, 1\in T$, and (hence) $\text{add}(x, 1) \in T$ by the first, second, and third term building rule, respectively. The latter term is usually written as $x+1$, using infix notation and the more common operator symbol + for convenience.
|
44 |
+
|
45 |
+
A substitution is a mapping $\sigma: V\rightarrow T$ from variables to terms; the notation $ \{x_1\mapsto t_1, ..., x_k \mapsto t_k\}$ refers to a substitution mapping each variable $x_i$ to the term $t_i$, for $i=1,...,k$, and every other variable to itself. Applying that substitution to a term $t$ is written in postfix notation as $t \{x_1 \mapsto t_1, ..., x_k \mapsto t_k\}$; it means to (simultaneously) replace every occurrence of each variable $x_i$ in the term $t$ by $t_i$. The result $t\tau$ of applying a substitution $\tau$ to a term $t$ is called an instance of that term $t$.
|
46 |
+
|
47 |
+
As a first-order example, applying the substitution { x ↦ h(a,y), z ↦ b } to the term f(x,a,g(z),y) yields the instance f(h(a,y),a,g(b),y).
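A minimal Python sketch of this operation (an illustrative encoding chosen for this text, not from the article: a variable is a bare string, an application is a tuple whose first entry is the function or constant symbol):

<code>
# Terms: a variable is a string; f(t1,...,tn) is the tuple ('f', t1, ..., tn);
# a constant a is the 0-ary application ('a',).
def apply_subst(term, subst):
    """Simultaneously replace every occurrence of each bound variable."""
    if isinstance(term, str):                                  # a variable
        return subst.get(term, term)
    return (term[0],) + tuple(apply_subst(a, subst) for a in term[1:])

t = ('f', 'x', ('a',), ('g', 'z'), 'y')                        # f(x, a, g(z), y)
print(apply_subst(t, {'x': ('h', ('a',), 'y'), 'z': ('b',)}))
# ('f', ('h', ('a',), 'y'), ('a',), ('g', ('b',)), 'y'), i.e. f(h(a,y), a, g(b), y)
</code>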
|
48 |
+
|
49 |
+
If a term $t$ has an instance equivalent to a term $u$, that is, if $t\sigma \equiv u$ for some substitution $\sigma$, then $t$ is called more general than $u$, and $u$ is called more special than, or subsumed by, $t$. For example, $x\oplus a$ is more general than $a\oplus b$ if ⊕ is commutative, since then $(x\oplus a) \{x\mapsto b\} = b\oplus a\equiv a\oplus b$.
|
50 |
+
|
51 |
+
If ≡ is literal (syntactic) identity of terms, a term may be both more general and more special than another one only if both terms differ just in their variable names, not in their syntactic structure; such terms are called variants, or renamings of each other.
|
52 |
+
|
53 |
+
For example,
|
54 |
+
$$
|
55 |
+
f(x_1, a, g(z_1), y_1)
|
56 |
+
$$
|
57 |
+
|
58 |
+
is a variant of
|
59 |
+
$$
|
60 |
+
f(x_2, a, g(z_2), y_2)
|
61 |
+
$$,
|
62 |
+
|
63 |
+
since
|
64 |
+
|
65 |
+
|
66 |
+
$$
|
67 |
+
f(x_1, a, g(z_1), y_1) \{x_1 \mapsto x_2, y_1 \mapsto y_2, z_1 \mapsto z_2\} = f(x_2, a, g(z_2), y_2)
|
68 |
+
$$
|
69 |
+
|
70 |
+
|
71 |
+
|
72 |
+
and
|
73 |
+
|
74 |
+
|
75 |
+
$$
|
76 |
+
f(x_2, a, g(z_2), y_2) \{x_2 \mapsto x_1, y_2 \mapsto y_1, z_2 \mapsto z_1\} = f(x_1, a, g(z_1), y_1)
|
77 |
+
$$.
|
78 |
+
|
79 |
+
|
80 |
+
|
81 |
+
However, $f(x_1, a, g(z_1), y_1)$ is not a variant of $f(x_2, a, g(x_2), x_2)$, since no substitution can transform the latter term into the former one.
|
82 |
+
|
83 |
+
The latter term is therefore properly more special than the former one.
|
84 |
+
|
85 |
+
For arbitrary $\equiv$, a term may be both more general and more special than a structurally different term.
|
86 |
+
|
87 |
+
For example, if ⊕ is idempotent, that is, if always $x \oplus x \equiv x$, then the term $x\oplus y$ is more general than $z$, and vice versa, although $x\oplus y$ and $z$ are of different structure.
|
88 |
+
|
89 |
+
A substitution $\sigma$ is more special than, or subsumed by, a substitution $\tau$ if $t\sigma$ is more special than $t\tau$ for each term $t$. We also say that $\tau$ is more general than $\sigma$.
|
90 |
+
|
91 |
+
For instance $ \{x \mapsto a, y \mapsto a \}$ is more special than $\tau = \{x\mapsto y\}$,
|
92 |
+
|
93 |
+
but
|
94 |
+
$$
|
95 |
+
\sigma = \{x\mapsto a\}
|
96 |
+
$$ is not,
|
97 |
+
|
98 |
+
as $f(x, y)\sigma = f(a, y)$ is not more special than
|
99 |
+
$$
|
100 |
+
f(x, y) \tau = f(y, y)
|
101 |
+
$$.
|
102 |
+
|
103 |
+
A unification problem is a finite set { l<sub>1</sub> ≐ r<sub>1</sub>, ..., l<sub>n</sub> ≐ r<sub>n</sub> } of potential equations, where l<sub>i</sub>, r<sub>i</sub> ∈ T.
|
104 |
+
|
105 |
+
A substitution σ is a solution of that problem if l<sub>i</sub>σ ≡ r<sub>i</sub>σ for $i = 1, ..., n$. Such a substitution is also called a unifier of the unification problem.
|
106 |
+
|
107 |
+
For example, if ⊕ is associative, the unification problem { x ⊕ a ≐ a ⊕ x } has the solutions {x ↦ a}, {x ↦ a ⊕ a}, {x ↦ a ⊕ a ⊕ a}, etc., while the problem { x ⊕ a ≐ a } has no solution.
|
108 |
+
|
109 |
+
For a given unification problem, a set S of unifiers is called complete if each solution substitution is subsumed by some substitution σ ∈ S; the set S is called minimal if none of its members subsumes another one.
|
110 |
+
|
111 |
+
Syntactic unification of first-order terms is the most widely used unification framework.
|
112 |
+
|
113 |
+
It is based on T being the set of first-order terms (over some given set V of variables, C of constants and F<sub>n</sub> of n-ary function symbols) and on ≡ being syntactic equality.
|
114 |
+
|
115 |
+
In this framework, each solvable unification problem {l<sub>1</sub> ≐ r<sub>1</sub>, ..., l<sub>n</sub> ≐ r<sub>n</sub>} has a complete, and obviously minimal, singleton solution set {σ}.
|
116 |
+
|
117 |
+
Its member σ is called the most general unifier (mgu) of the problem.
|
118 |
+
|
119 |
+
The terms on the left and the right hand side of each potential equation become syntactically equal when the mgu is applied, i.e. l<sub>1</sub>σ = r<sub>1</sub>σ ∧ ... ∧ l<sub>n</sub>σ = r<sub>n</sub>σ.
|
120 |
+
|
121 |
+
Any unifier of the problem is subsumed by the mgu σ.
|
122 |
+
|
123 |
+
The mgu is unique up to variants: if S<sub>1</sub> and S<sub>2</sub> are both complete and minimal solution sets of the same syntactical unification problem, then S<sub>1</sub> = { σ<sub>1</sub> } and S<sub>2</sub> = { σ<sub>2</sub> } for some substitutions σ<sub>1</sub> and σ<sub>2</sub>, and xσ<sub>1</sub> is a variant of xσ<sub>2</sub> for each variable x occurring in the problem.
|
124 |
+
|
125 |
+
For example, the unification problem { x ≐ z, y ≐ f(x) } has a unifier { x ↦ z, y ↦ f(z) }, because applying it to both sides of each equation yields syntactically equal terms: x { x ↦ z, y ↦ f(z) } = z = z { x ↦ z, y ↦ f(z) }, and y { x ↦ z, y ↦ f(z) } = f(z) = f(x) { x ↦ z, y ↦ f(z) }.
|
126 |
+
|
127 |
+
This is also the most general unifier.
|
128 |
+
|
129 |
+
Other unifiers for the same problem are e.g. { x ↦ f(x<sub>1</sub>), y ↦ f(f(x<sub>1</sub>)), z ↦ f(x<sub>1</sub>) }, { x ↦ f(f(x<sub>1</sub>)), y ↦ f(f(f(x<sub>1</sub>))), z ↦ f(f(x<sub>1</sub>)) }, and so on; there are infinitely many similar unifiers.
|
130 |
+
|
131 |
+
As another example, the problem g(x,x) ≐ f(y) has no solution with respect to ≡ being literal identity, since any substitution applied to the left and right hand side will keep the outermost g and f, respectively, and terms with different outermost function symbols are syntactically different.
|
132 |
+
|
133 |
+
Robinson's 1965 unification algorithm:
|
134 |
+
|
135 |
+
|
136 |
+
|
137 |
+
Symbols are ordered such that variables precede function symbols.
|
138 |
+
|
139 |
+
Terms are ordered by increasing written length; equally long terms
|
140 |
+
|
141 |
+
are ordered lexicographically. For a set T of terms, its disagreement
|
142 |
+
|
143 |
+
path p is the lexicographically least path where two member terms
|
144 |
+
|
145 |
+
of T differ. Its disagreement set is the set of subterms starting at p,
|
146 |
+
|
147 |
+
formally: { t|<sub>p</sub> : t ∈ T }.
|
148 |
+
|
149 |
+
Algorithm:
|
150 |
+
|
151 |
+
<code>
Given a set T of terms to be unified
Let $\sigma$ initially be the identity substitution
do forever
    if $T\sigma$ is a singleton set then
        return $\sigma$
    fi
    let D be the disagreement set of $T\sigma$
    let s, t be the two lexicographically least terms in D
    if s is not a variable or s occurs in t then
        return "NONUNIFIABLE"
    fi
    $\sigma := \sigma \{ s \mapsto t \}$
done
</code>
|
180 |
+
|
181 |
+
|
182 |
+
|
183 |
+
The first algorithm given by Robinson (1965) was rather inefficient; cf. box.
|
184 |
+
|
185 |
+
The following faster algorithm originated from Martelli, Montanari (1982).
|
186 |
+
|
187 |
+
This paper also lists preceding attempts to find an efficient syntactical unification algorithm, and states that linear-time algorithms were discovered independently by Martelli, Montanari (1976) and Paterson, Wegman (1978).
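As an illustration, here is a minimal Python sketch of syntactic first-order unification with occurs check (a naive substitution-composing variant written for this text, not Robinson's or Martelli–Montanari's exact algorithm; the term encoding is ad hoc):

<code>
def is_var(t):
    return isinstance(t, str)                     # variables are bare strings

def occurs(v, t):
    return v == t if is_var(t) else any(occurs(v, a) for a in t[1:])

def apply_subst(t, s):
    """Apply s, walking binding chains so triangular substitutions resolve."""
    if is_var(t):
        return apply_subst(s[t], s) if t in s else t
    return (t[0],) + tuple(apply_subst(a, s) for a in t[1:])

def unify(t1, t2, s=None):
    """Return an mgu of t1 and t2 as a dict, or None if not unifiable."""
    s = {} if s is None else s
    t1, t2 = apply_subst(t1, s), apply_subst(t2, s)
    if is_var(t1) and t1 == t2:
        return s
    if is_var(t1):
        if occurs(t1, t2):
            return None                           # occurs check fails
        s[t1] = t2
        return s
    if is_var(t2):
        return unify(t2, t1, s)
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None                               # function symbol clash
    for a, b in zip(t1[1:], t2[1:]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s

# The problem { x ≐ z, y ≐ f(x) } from above has mgu { x ↦ z, y ↦ f(z) }:
s = unify(('pair', 'x', 'y'), ('pair', 'z', ('f', 'x')))
print({v: apply_subst(t, s) for v, t in s.items()})   # {'x': 'z', 'y': ('f', 'z')}
</code>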
|
188 |
+
|
189 |
+
Unification is decidable for the following theories:

* A,C,N<sub>l</sub>
|
190 |
+
|
191 |
+
* K4 modal algebras
|
192 |
+
|
193 |
+
Unification is semi-decidable for the following theories:
|
194 |
+
|
195 |
+
* A,D<sub>l</sub>,D<sub>r</sub>
|
196 |
+
|
197 |
+
* A,C,D<sub>l</sub>
|
198 |
+
|
199 |
+
* Commutative rings
|
200 |
+
|
201 |
+
If there is a convergent term rewriting system R available for E,
|
202 |
+
|
203 |
+
the one-sided paramodulation algorithm
|
204 |
+
|
205 |
+
can be used to enumerate all solutions of given equations.
|
206 |
+
|
207 |
+
Starting with G being the unification problem to be solved and S being the identity substitution, rules are applied nondeterministically until the empty set appears as the actual G, in which case the actual S is a unifying substitution. Depending on the order the paramodulation rules are applied, on the choice of the actual equation from G, and on the choice of R's rules in mutate, different computation paths are possible. Only some lead to a solution, while others end at a G ≠ {} where no further rule is applicable (e.g. G = { f(...) ≐ g(...) }).
|
208 |
+
|
209 |
+
As an example, a term rewriting system R is used that defines the append operator of lists built from cons and nil; here cons(x,y) is written in infix notation as x.y for brevity; e.g. app(a.b.nil,c.d.nil) → a.app(b.nil,c.d.nil) → a.b.app(nil,c.d.nil) → a.b.c.d.nil demonstrates the concatenation of the lists a.b.nil and c.d.nil, employing the rewrite rules 2, 2, and 1. The equational theory E corresponding to R is the congruence closure of R, both viewed as binary relations on terms.
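A small Python sketch of running these two rewrite rules to normal form (encoding chosen here for illustration: nil is the string 'nil', cons(x,y) is ('cons', x, y), and app(x,y) is ('app', x, y)):

<code>
def rewrite_app(t):
    """Exhaustively apply rule 1: app(nil,z) -> z and
    rule 2: app(x.y, z) -> x.app(y,z) to ground terms."""
    if isinstance(t, tuple) and t[0] == 'app':
        xs, zs = rewrite_app(t[1]), rewrite_app(t[2])
        if xs == 'nil':                                   # rule 1
            return zs
        if isinstance(xs, tuple) and xs[0] == 'cons':     # rule 2
            return ('cons', xs[1], rewrite_app(('app', xs[2], zs)))
        return ('app', xs, zs)                            # no rule applies
    return t

ab = ('cons', 'a', ('cons', 'b', 'nil'))                  # a.b.nil
cd = ('cons', 'c', ('cons', 'd', 'nil'))                  # c.d.nil
print(rewrite_app(('app', ab, cd)))                       # a.b.c.d.nil as nested tuples
</code>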
|
210 |
+
|
211 |
+
For example, app(a.b.nil,c.d.nil) ≡ a.b.c.d.nil ≡ app(a.b.c.d.nil,nil). The paramodulation algorithm enumerates solutions to equations with respect to that E when fed with the example R.
|
212 |
+
|
213 |
+
A successful example computation path for the unification problem { app(x,app(y,x)) ≐ a.a.nil } is shown below. To avoid variable name clashes, rewrite rules are consistently renamed each time before their use by rule mutate; v<sub>2</sub>, v<sub>3</sub>, ... are computer-generated variable names for this purpose. In each line, the chosen equation from G is highlighted in red. Each time the mutate rule is applied, the chosen rewrite rule (1 or 2) is indicated in parentheses. From the last line, the unifying substitution S = { y ↦ nil, x ↦ a.nil } can be obtained. In fact,
|
214 |
+
|
215 |
+
app(x,app(y,x)) {y↦nil, x↦ a.nil } = app(a.nil,app(nil,a.nil)) ≡ app(a.nil,a.nil) ≡ a.app(nil,a.nil) ≡ a.a.nil solves the given problem.
|
216 |
+
|
217 |
+
A second successful computation path, obtainable by choosing "mutate(1), mutate(2), mutate(2), mutate(1)" leads to the substitution S = { y ↦ a.a.nil, x ↦ nil }; it is not shown here. No other path leads to a success.
|
218 |
+
|
219 |
+
If R is a convergent term rewriting system for E,
|
220 |
+
|
221 |
+
an approach alternative to the previous section consists in successive application of "narrowing steps";
|
222 |
+
|
223 |
+
this will eventually enumerate all solutions of a given equation.
|
224 |
+
|
225 |
+
A narrowing step (cf. picture) consists in
|
226 |
+
|
227 |
+
* choosing a nonvariable subterm of the current term,
|
228 |
+
|
229 |
+
* syntactically unifying it with the left hand side of a rule from R, and
|
230 |
+
|
231 |
+
* replacing the instantiated rule's right hand side into the instantiated term.
|
232 |
+
|
233 |
+
Formally, if l → r is a renamed copy of a rewrite rule from R, having no variables in common with a term s, and the subterm of s at position p is not a variable and is unifiable with l via the mgu σ, then s can be narrowed to the term t = sσ[rσ]<sub>p</sub>, i.e. to the term sσ, with the subterm at p replaced by rσ. The situation that s can be narrowed to t is commonly denoted as s ↝ t.
|
234 |
+
|
235 |
+
Intuitively, a sequence of narrowing steps t<sub>1</sub> ↝ t<sub>2</sub> ↝ ... ↝ t<sub>n</sub> can be thought of as a sequence of rewrite steps t<sub>1</sub> → t<sub>2</sub> → ... → t<sub>n</sub>, but with the initial term t<sub>1</sub> being further and further instantiated, as necessary to make each of the used rules applicable.
|
236 |
+
|
237 |
+
The above example paramodulation computation corresponds to the following narrowing sequence ("↓" indicating instantiation here):
|
238 |
+
|
239 |
+
The last term, v<sub>2</sub>.v<sub>2</sub>.nil, can be syntactically unified with the original right hand side term a.a.nil.
|
240 |
+
|
241 |
+
The narrowing lemma ensures that whenever an instance of a term s can be rewritten to a term t by a convergent term rewriting system, then s and t can be narrowed and rewritten to terms s′ and t′, respectively, such that t′ is an instance of s′.
|
242 |
+
|
243 |
+
Formally: whenever sσ ↠ t holds for some substitution σ, then there exist terms s′ and t′ such that s ↝<sup>*</sup> s′ and t ↠<sup>*</sup> t′ and s′τ = t′ for some substitution τ.
|
244 |
+
|
245 |
+
Many applications require one to consider the unification of typed lambda-terms instead of first-order terms. Such unification is often called higher-order unification. A well studied branch of higher-order unification is the problem of unifying simply typed lambda terms modulo the equality determined by αβη conversions. Such unification problems do not have most general unifiers. While higher-order unification is undecidable, Gérard Huet gave a semi-decidable (pre-)unification algorithm that allows a systematic search of the space of unifiers (generalizing the unification algorithm of Martelli-Montanari with rules for terms containing higher-order variables) that seems to work sufficiently well in practice. Huet and Gilles Dowek have written articles surveying this topic.
|
246 |
+
|
247 |
+
Dale Miller has described what is now called higher-order pattern unification. This subset of higher-order unification is decidable and solvable unification problems have most-general unifiers. Many computer systems that contain higher-order unification, such as the higher-order logic programming languages λProlog and Twelf, often implement only the pattern fragment and not full higher-order unification.
|
248 |
+
|
249 |
+
In computational linguistics, one of the most influential theories of ellipsis is that ellipses are represented by free variables whose values are then determined using Higher-Order Unification (HOU). For instance, the semantic representation of "Jon likes Mary and Peter does too" is like(j, m) ∧ R(p) and the value of R (the semantic representation of the ellipsis) is determined by the equation like(j, m) = R(j) . The process of solving such equations is called Higher-Order Unification.
|
250 |
+
|
251 |
+
For example, the unification problem { f(a, b, a) ≐ d(b, a, c) }, where the only variable is f, has the
|
252 |
+
|
253 |
+
solutions {f ↦ λx.λy.λz.d(y, x, c) }, {f ↦ λx.λy.λz.d(y, z, c) },
|
254 |
+
|
255 |
+
{f ↦ λx.λy.λz.d(y, a, c) }, {f ↦ λx.λy.λz.d(b, x, c) },
|
256 |
+
|
257 |
+
{f ↦ λx.λy.λz.d(b, z, c) } and {f ↦ λx.λy.λz.d(b, a, c) }.
|
258 |
+
|
259 |
+
Wayne Snyder gave a generalization of both higher-order unification and E-unification, i.e. an algorithm to unify lambda-terms modulo an equational theory.
|
wiki/wikipedia/1016.txt
ADDED
@@ -0,0 +1,23 @@
1 |
+
In abstract algebra, Jacobson's conjecture is an open problem in ring theory concerning the intersection of powers of the Jacobson radical of a Noetherian ring.
|
2 |
+
|
3 |
+
So far, it has been proven only for special types of Noetherian rings. Examples exist to show that the conjecture can fail when the ring is not Noetherian on a side, so it is absolutely necessary for the ring to be two-sided Noetherian.
|
4 |
+
|
5 |
+
The conjecture is named for the algebraist Nathan Jacobson who posed the first version of the conjecture.
|
6 |
+
|
7 |
+
For a ring R with Jacobson radical J, the nonnegative powers $J^n$ are defined by using the product of ideals.
|
8 |
+
|
9 |
+
Jacobson's conjecture: In a right-and-left Noetherian ring, $\bigcap_{n\in \mathbb{N}}J^n=\{0\}.$
|
10 |
+
|
11 |
+
In other words: "The only element of a Noetherian ring in all powers of J is 0."
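For a concrete commutative illustration (an instance of the Krull intersection theorem mentioned below, worked out here rather than taken from the article): in the Noetherian local ring $\mathbb{Z}_{(p)}$ of integers localized at a prime $p$, the Jacobson radical is $J = p\mathbb{Z}_{(p)}$, and

$$
\bigcap_{n\in \mathbb{N}}J^n=\bigcap_{n\in \mathbb{N}}p^n\mathbb{Z}_{(p)}=\{0\},
$$

since a nonzero fraction $a/b$ with $p \nmid b$ lies in $p^n\mathbb{Z}_{(p)}$ only when $p^n$ divides $a$, which fails for all large $n$.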
|
12 |
+
|
13 |
+
The original conjecture posed by Jacobson in 1956 asked about noncommutative one-sided Noetherian rings; however, Israel Nathan Herstein produced a counterexample in 1965, and soon afterwards Arun Vinayak Jategaonkar produced a different example which was a left principal ideal domain. From that point on, the conjecture was reformulated to require two-sided Noetherian rings.
|
14 |
+
|
15 |
+
Jacobson's conjecture has been verified for particular types of Noetherian rings:
|
16 |
+
|
17 |
+
* Commutative Noetherian rings all satisfy Jacobson's conjecture. This is a consequence of the Krull intersection theorem.
|
18 |
+
|
19 |
+
* Fully bounded Noetherian rings
|
20 |
+
|
21 |
+
* Noetherian rings with Krull dimension 1
|
22 |
+
|
23 |
+
* Noetherian rings satisfying the second layer condition
|
wiki/wikipedia/1017.txt
ADDED
@@ -0,0 +1,15 @@
1 |
+
Visual calculus, invented by Mamikon Mnatsakanian (known as Mamikon), is an approach to solving a variety of integral calculus problems. Many problems that would otherwise seem quite difficult yield to the method with hardly a line of calculation, often reminiscent of what Martin Gardner calls "aha! solutions" or Roger Nelsen a proof without words.
|
2 |
+
|
3 |
+
Mamikon devised his method in 1959 while an undergraduate, first applying it to a well-known geometry problem: find the area of a ring (annulus), given the length of a chord tangent to the inner circumference. Perhaps surprisingly, no additional information is needed; the solution does not depend on the ring's inner and outer dimensions.
|
4 |
+
|
5 |
+
The traditional approach involves algebra and application of the Pythagorean theorem. Mamikon's method, however, envisions an alternate construction of the ring: first the inner circle alone is drawn, then a constant-length tangent is made to travel along its circumference, "sweeping out" the ring as it goes.
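For comparison, a sketch of that traditional computation (with notation introduced here): if the annulus has inner radius $r$ and outer radius $R$, a chord of the outer circle tangent to the inner circle has half-length $a$ satisfying $a^2 + r^2 = R^2$ by the Pythagorean theorem, so the ring's area is

$$
\pi R^2 - \pi r^2 = \pi(R^2 - r^2) = \pi a^2,
$$

which indeed depends only on the half-chord $a$, matching Mamikon's tangent-cluster answer.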
|
6 |
+
|
7 |
+
Now if all the (constant-length) tangents used in constructing the ring are translated so that their points of tangency coincide, the result is a circular disk of known radius (and easily computed area). Indeed, since the inner circle's radius is irrelevant, one could just as well have started with a circle of radius zero (a point), and sweeping out a ring around a circle of zero radius is indistinguishable from simply rotating a line segment about one of its endpoints and sweeping out a disk.
|
8 |
+
|
9 |
+
Mamikon's insight was to recognize the equivalence of the two constructions; and because they are equivalent, they yield equal areas. Moreover, so long as it is given that the tangent length is constant, the two starting curves need not be circular, a finding not easily proven by more traditional geometric methods. This yields Mamikon's theorem:
|
10 |
+
|
11 |
+
The area of a tangent sweep is equal to the area of its tangent cluster, regardless of the shape of the original curve.
|
12 |
+
|
13 |
+
The area of a cycloid can be calculated by considering the area between it and an enclosing rectangle, a region swept out by the tangents to the cycloid. These tangents can all be clustered to form a circle. If the circle generating the cycloid has radius r then this circle also has radius r and area πr<sup>2</sup>. The area of the rectangle is 2r × 2πr = 4πr<sup>2</sup>. Therefore the area of the cycloid is 3πr<sup>2</sup>: it is 3 times the area of the generating circle.
|
14 |
+
|
15 |
+
The tangent cluster can be seen to be a circle because the cycloid is generated by a circle and the tangent to the cycloid will be at right angle to the line from the generating point to the rolling point. Thus the tangent and the line to the contact point form a right-angled triangle in the generating circle. This means that clustered together the tangents will describe the shape of the generating circle.
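Mamikon's derivation is purely geometric, but the stated value is easy to confirm with ordinary calculus; the following Python snippet (parameterization and names are ours, not from the article) integrates under one arch of the cycloid numerically:

<code>
import math

def cycloid_area(r, steps=100_000):
    # One arch: x = r(t - sin t), y = r(1 - cos t) for t in [0, 2*pi].
    # The area under the arch is the line integral of y dx = r^2 (1 - cos t)^2 dt,
    # approximated here by the midpoint rule.
    dt = 2 * math.pi / steps
    return sum((r * (1 - math.cos((i + 0.5) * dt))) ** 2 * dt
               for i in range(steps))

print(cycloid_area(1.0))        # ~9.42477796...
print(3 * math.pi)              #  9.42477796..., i.e. 3 * (area of generating circle)
</code>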
|
wiki/wikipedia/1018.txt
ADDED
@@ -0,0 +1,11 @@
1 |
+
The problem in number theory known as "Fermat's Last Theorem" has repeatedly received attention in fiction and popular culture. It was proved by Andrew Wiles in 1994.
|
2 |
+
|
3 |
+
* Fermat's equation appears in the 2000 film Bedazzled with Elizabeth Hurley and Brendan Fraser. Hurley plays the devil who, in one of her many forms, appears as a school teacher who assigns Fermat's Last Theorem as a homework problem. The theorem is also mentioned in the Star Trek: Deep Space Nine episode "Facets", in which Jadzia Dax comments that one of her previous hosts, Tobin Dax, had "the most original approach to the proof since Wiles over 300 years ago."
|
4 |
+
|
5 |
+
* A sum, proved impossible by the theorem, appears in the 1995 episode of The Simpsons, "Treehouse of Horror VI". In the three-dimensional world in "Homer<sup>3</sup>", the equation $1782^{12} + 1841^{12} = 1922^{12}$ is visible, just as the dimension begins to collapse. The joke is that the twelfth root of the sum does evaluate to 1922 due to rounding errors when entered into most handheld calculators, even though the equality cannot hold: the left hand side is odd, while $1922^{12}$ is even. (The twelfth root of the left-hand side is not 1922, but approximately 1921.99999996.) A second "counterexample" appeared in the 1998 episode, "The Wizard of Evergreen Terrace": $3987^{12} + 4365^{12} = 4472^{12}$. These agree to 10 of 44 decimal digits, but simple divisibility rules show 3987 and 4365 are multiples of 3, so that a sum of their powers is also. The same rule reveals that 4472 is not divisible by 3, so this "equation" cannot hold either.
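The parity and divisibility arguments are easy to check with exact integer arithmetic; a short illustrative snippet:

<code>
lhs = 1782 ** 12 + 1841 ** 12
rhs = 1922 ** 12
print(lhs == rhs)           # False
print(lhs % 2, rhs % 2)     # 1 0: the sum is odd while 1922**12 is even
print(lhs ** (1 / 12))      # ~1921.9999999955, which a short display rounds to 1922
# The 1998 near-miss fails a divisibility-by-3 test instead:
print((3987 ** 12 + 4365 ** 12) % 3, 4472 ** 12 % 3)   # 0 1
</code>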
|
6 |
+
|
7 |
+
* In the Doctor Who 2010 episode "The Eleventh Hour", the Doctor transmits a proof of Fermat's Last Theorem by typing it in just a few seconds on a laptop, to prove his genius to a collection of world leaders discussing the latest threat to the human race.
|
8 |
+
|
9 |
+
* In Tom Stoppard's 1993 play Arcadia, Septimus Hodge poses the problem of proving Fermat's Last Theorem to the precocious Thomasina Coverly (who is perhaps a mathematical prodigy), in an attempt to keep her busy. Thomasina responds that Fermat had no proof and claimed otherwise in order to torment later generations. Shortly after Arcadia opened in London, Andrew Wiles announced his proof of Fermat's Last Theorem, a coincidence of timing that resulted in news stories about the proof quoting Stoppard.
|
10 |
+
|
11 |
+
* Fermat's Last Tango is a 2000 stage musical by Joanne Sydney Lessner and Joshua Rosenblum. Protagonist "Daniel Keane" is a fictionalized Andrew Wiles. The characters include Fermat, Pythagoras, Euclid, Newton, and Gauss, the singing, dancing mathematicians of "the aftermath".
|
wiki/wikipedia/1019.txt
ADDED
@@ -0,0 +1,33 @@
1 |
+
In mathematics, the Cauchy–Hadamard theorem is a result in complex analysis named after the French mathematicians Augustin Louis Cauchy and Jacques Hadamard, describing the radius of convergence of a power series. It was published in 1821 by Cauchy, but remained relatively unknown until Hadamard rediscovered it. Hadamard's first publication of this result was in 1888; he also included it as part of his 1892 Ph.D. thesis.
|
2 |
+
|
3 |
+
Consider the formal power series in one complex variable z of the form
|
4 |
+
|
5 |
+
<math display="block">f(z) = \sum_{n = 0}^{\infty} c_{n} (z-a)^{n}</math>
|
6 |
+
|
7 |
+
where $a, c_n \in \Complex.$
|
8 |
+
|
9 |
+
Then the radius of convergence $R$ of f at the point a is given by
|
10 |
+
|
11 |
+
<math display="block">\frac{1}{R} = \limsup_{n \to \infty} \left( | c_{n} |^{1/n} \right)</math>
|
12 |
+
|
13 |
+
where lim sup denotes the limit superior, the limit as n approaches infinity of the supremum of the sequence values after the nth position. If the sequence values are unbounded so that the lim sup is ∞, then the power series does not converge near a, while if the lim sup is 0 then the radius of convergence is ∞, meaning that the series converges on the entire plane.
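As a numeric illustration of the formula (the coefficient sequence is our own toy example), the series with $c_n = 2^n + (-1)^n$ has $\limsup_n |c_n|^{1/n} = 2$ and hence radius of convergence $1/2$:

<code>
# Approximate limsup |c_n|^(1/n) by the supremum over a tail of the sequence.
roots = [(2 ** n + (-1) ** n) ** (1.0 / n) for n in range(1, 500)]
print(max(roots[300:]))   # ~2.0, hence R ~ 1/2
</code>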
|
14 |
+
|
15 |
+
Without loss of generality assume that $a=0$. We will show first that the power series $\sum_n c_n z^n$ converges for $|z|<R$, and then that it diverges for $|z|>R$.
|
16 |
+
|
17 |
+
First suppose $|z|<R$, and let $t=1/R$, assuming for now that $t$ is neither $0$ nor $\pm\infty$.
|
18 |
+
|
19 |
+
For any $\varepsilon > 0$, there exist only finitely many $n$ such that $\sqrt[n]{|c_n|} \geq t+\varepsilon$.
|
20 |
+
|
21 |
+
Now $|c_n| \leq (t+\varepsilon)^n$ for all but a finite number of $c_n$, so the series $\sum_n c_n z^n$ converges if $|z| < 1/(t+\varepsilon)$. This proves the first part.
|
22 |
+
|
23 |
+
Conversely, for $\varepsilon > 0$, $|c_n|\geq (t-\varepsilon)^n$ for infinitely many $c_n$, so if $|z|=1/(t-\varepsilon) > R$, we see that the series cannot converge because its nth term does not tend to 0.
|
24 |
+
|
25 |
+
Let $\alpha$ be a multi-index (an $n$-tuple of integers) with $|\alpha|=\alpha_1+\cdots+\alpha_n$. Then the multidimensional power series

<math display="block">\sum_{\alpha\geq0}c_\alpha(z-a)^\alpha := \sum_{\alpha_1\geq0,\ldots,\alpha_n\geq0}c_{\alpha_1,\ldots,\alpha_n}(z_1-a_1)^{\alpha_1}\cdots(z_n-a_n)^{\alpha_n}</math>

converges with radius of convergence $\rho$ (which is also a multi-index) if and only if

<math display="block">\limsup_{|\alpha|\to\infty} \sqrt[|\alpha|]{|c_\alpha|\rho^\alpha}=1</math>
|
32 |
+
|
33 |
+
The proof can be found in
|
wiki/wikipedia/102.txt
ADDED
@@ -0,0 +1,35 @@
1 |
+
In graph theory and theoretical computer science, the longest path problem is the problem of finding a simple path of maximum length in a given graph. A path is called simple if it does not have any repeated vertices; the length of a path may either be measured by its number of edges, or (in weighted graphs) by the sum of the weights of its edges. In contrast to the shortest path problem, which can be solved in polynomial time in graphs without negative-weight cycles, the longest path problem is NP-hard and the decision version of the problem, which asks whether a path exists of at least some given length, is NP-complete. This means that the decision problem cannot be solved in polynomial time for arbitrary graphs unless P = NP. Stronger hardness results are also known showing that it is difficult to approximate. However, it has a linear time solution for directed acyclic graphs, which has important applications in finding the critical path in scheduling problems.
|
2 |
+
|
3 |
+
The NP-hardness of the unweighted longest path problem can be shown using a reduction from the Hamiltonian path problem: a graph G has a Hamiltonian path if and only if its longest path has length n − 1, where n is the number of vertices in G. Because the Hamiltonian path problem is NP-complete, this reduction shows that the decision version of the longest path problem is also NP-complete. In this decision problem, the input is a graph G and a number k; the desired output is "yes" if G contains a path of k or more edges, and "no" otherwise.
|
4 |
+
|
5 |
+
If the longest path problem could be solved in polynomial time, it could be used to solve this decision problem, by finding a longest path and then comparing its length to the number k. Therefore, the longest path problem is NP-hard. The question "does there exist a simple path in a given graph with at least k edges" is NP-complete.
|
6 |
+
|
7 |
+
In weighted complete graphs with non-negative edge weights, the weighted longest path problem is the same as the Travelling salesman path problem, because the longest path always includes all vertices.
|
8 |
+
|
9 |
+
A longest path between two given vertices s and t in a weighted graph G is the same thing as a shortest path in a graph −G derived from G by changing every weight to its negation. Therefore, if shortest paths can be found in −G, then longest paths can also be found in G. For a DAG, the longest path from a source vertex to all other vertices can be obtained by running the shortest-path algorithm on −G.
|
10 |
+
|
11 |
+
Similarly, for each vertex v in a given DAG, the length of the longest path ending at v may be obtained by the following steps:
|
12 |
+
|
13 |
+
# Find a topological ordering of the given DAG.
|
14 |
+
|
15 |
+
# For each vertex v of the DAG, in the topological ordering, compute the length of the longest path ending at v by looking at its incoming neighbors and adding one to the maximum length recorded for those neighbors. If v has no incoming neighbors, set the length of the longest path ending at v to zero. In either case, record this number so that later steps of the algorithm can access it.
|
16 |
+
|
17 |
+
Once this has been done, the longest path in the whole DAG may be obtained by starting at the vertex v with the largest recorded value, then repeatedly stepping backwards to its incoming neighbor with the largest recorded value, and reversing the sequence of vertices found in this way.
|
18 |
+
|
19 |
+
This is equivalent to running the shortest-path algorithm on −G.
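A compact Python sketch of this procedure (Kahn's algorithm for the topological order, then the dynamic program described above; the graph encoding is ours):

<code>
from collections import defaultdict, deque

def longest_path_length(n, edges):
    """Number of edges of the longest path in a DAG on vertices 0..n-1."""
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in range(n) if indeg[v] == 0)
    best = [0] * n              # best[v] = longest path (in edges) ending at v
    while queue:
        u = queue.popleft()
        for v in adj[u]:        # relaxing outgoing edges realizes the
            best[v] = max(best[v], best[u] + 1)   # "incoming neighbors" rule
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return max(best)

print(longest_path_length(5, [(0, 1), (1, 2), (0, 3), (3, 4), (4, 2)]))  # 3
</code>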
|
20 |
+
|
21 |
+
The critical path method for scheduling a set of activities involves the construction of a directed acyclic graph in which the vertices represent project milestones and the edges represent activities that must be performed after one milestone and before another; each edge is weighted by an estimate of the amount of time the corresponding activity will take to complete. In such a graph, the longest path from the first milestone to the last one is the critical path, which describes the total time for completing the project.
|
22 |
+
|
23 |
+
The best polynomial time approximation algorithm known for this case achieves only a very weak approximation ratio, $ n/\exp(\Omega(\sqrt{\log n}))$. For all <span style="display: inline-block;">$\epsilon>0$,</span> it is not possible to approximate the longest path to within a factor of $2^{(\log n)^{1-\epsilon}}$ unless NP is contained within quasi-polynomial deterministic time; however, there is a big gap between this inapproximability result and the known approximation algorithms for this problem.
|
24 |
+
|
25 |
+
In the case of unweighted but directed graphs, strong inapproximability results are known. For every $\epsilon>0$ the problem cannot be approximated to within a factor of $n^{1-\epsilon}$ unless P = NP, and with stronger complexity-theoretic assumptions it cannot be approximated to within a factor of $n/\log^{2+\epsilon} n$.
|
26 |
+
|
27 |
+
The longest path problem is fixed-parameter tractable when parameterized by the length of the path. For instance, it can be solved in time linear in the size of the input graph (but exponential in the length of the path), by an algorithm that performs the following steps:
|
28 |
+
|
29 |
+
# Perform a depth-first search of the graph. Let $d$ be the depth of the resulting depth-first search tree.
|
30 |
+
|
31 |
+
# Use the sequence of root-to-leaf paths of the depth-first search tree, in the order in which they were traversed by the search, to construct a path decomposition of the graph, with pathwidth $d$.
|
32 |
+
|
33 |
+
# Apply dynamic programming to this path decomposition to find a longest path in time $O(d!2^dn)$, where $n$ is the number of vertices in the graph.
|
34 |
+
|
35 |
+
Since the output path has length at least as large as $d$, the running time is also bounded by $O(\ell!2^\ell n)$, where $\ell$ is the length of the longest path. Using color-coding, the dependence on path length can be reduced to singly exponential.
|
wiki/wikipedia/1020.txt
ADDED
@@ -0,0 +1,15 @@
1 |
+
In mathematics, the Hilbert–Speiser theorem is a result on cyclotomic fields, characterising those with a normal integral basis. More generally, it applies to any finite abelian extension of Q, which by the Kronecker–Weber theorem are isomorphic to subfields of cyclotomic fields.
|
2 |
+
|
3 |
+
Hilbert–Speiser Theorem. A finite abelian extension K/Q has a normal integral basis if and only if it is tamely ramified over Q.
|
4 |
+
|
5 |
+
This is the condition that it should be a subfield of Q(ζ<sub>n</sub>) where n is a squarefree odd number. This result was introduced by Hilbert in his Zahlbericht and by Speiser.
|
6 |
+
|
7 |
+
In cases where the theorem states that a normal integral basis does exist, such a basis may be constructed by means of Gaussian periods. For example if we take n a prime number p > 2, Q(ζ<sub>p</sub>) has a normal integral basis consisting of all the p-th roots of unity other than 1. For a field K contained in it, the field trace can be used to construct such a basis in K also (see the article on Gaussian periods). Then in the case of n squarefree and odd, Q(ζ<sub>n</sub>) is a compositum of subfields of this type for the primes p dividing n (this follows from a simple argument on ramification). This decomposition can be used to treat any of its subfields.
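As a tiny worked illustration (computed here, not taken from the article): for p = 3, the ring of integers of Q(ζ<sub>3</sub>) is Z[ζ<sub>3</sub>] with Z-basis {1, ζ<sub>3</sub>}; since ζ<sub>3</sub><sup>2</sup> = −1 − ζ<sub>3</sub>, the change of basis to {ζ<sub>3</sub>, ζ<sub>3</sub><sup>2</sup>} is unimodular, so the two primitive cube roots of unity also form an integral basis, and the nontrivial Galois automorphism swaps them, making it a normal integral basis.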
|
8 |
+
|
9 |
+
proved a converse to the Hilbert–Speiser theorem:
|
10 |
+
|
11 |
+
Each finite tamely ramified abelian extension K of a fixed number field J has a relative normal integral basis if and only if J = Q.
|
12 |
+
|
13 |
+
There is an elliptic analogue of the Hilbert–Speiser theorem, proven by Srivastav and Taylor.
|
14 |
+
|
15 |
+
It is now called the Srivastav-Taylor theorem .
|
wiki/wikipedia/1021.txt
ADDED
@@ -0,0 +1,172 @@
1 |
+
In mathematics, the Brunn–Minkowski theorem (or Brunn–Minkowski inequality) is an inequality relating the volumes (or more generally Lebesgue measures) of compact subsets of Euclidean space. The original version of the Brunn–Minkowski theorem (Hermann Brunn 1887; Hermann Minkowski 1896) applied to convex sets; the generalization to compact nonconvex sets stated here is due to Lazar Lyusternik (1935).
|
2 |
+
|
3 |
+
Let n ≥ 1 and let μ denote the Lebesgue measure on R<sup>n</sup>. Let A and B be two nonempty compact subsets of R<sup>n</sup>. Then the following inequality holds:
|
4 |
+
$$
|
5 |
+
[ \mu (A + B) ]^{1/n} \geq [\mu (A)]^{1/n} + [\mu (B)]^{1/n},
|
6 |
+
$$
|
7 |
+
|
8 |
+
where A + B denotes the Minkowski sum:
|
9 |
+
$$
|
10 |
+
A + B := \{ a + b \in \mathbb{R}^{n} \mid a \in A,\ b \in B \}.
|
11 |
+
$$
|
12 |
+
|
13 |
+
The theorem is also true in the setting where $ A, B, A + B $ are only assumed to be measurable and non-empty.
|
14 |
+
|
15 |
+
The Brunn–Minkowski inequality implies a multiplicative version, using the inequality $ \lambda x + (1 - \lambda) y \geq x^{\lambda} y^{1-\lambda} $, which holds for $ x,y \geq 0, \lambda \in [0,1] $. In particular, $ \mu(\lambda A + (1 - \lambda) B) \geq ( \lambda \mu(A)^{1/n} + ( 1 - \lambda) \mu(B)^{1/n})^n \geq \mu(A)^{\lambda} \mu(B)^{1 - \lambda} $. The Prékopa–Leindler inequality is a functional generalization of this version of Brunn–Minkowski.
|
16 |
+
|
17 |
+
It is possible for $ A,B $ to be Lebesgue measurable and $ A + B $ to not be; a counterexample can be found in the literature. On the other hand, if $ A,B $ are Borel measurable, then $ A + B $ is the continuous image of the Borel set $ A \times B $, so analytic and thus measurable. See the discussion in Gardner's survey for more on this, as well as ways to avoid the measurability hypothesis.
|
18 |
+
|
19 |
+
We note that in the case that A and B are compact, so is A + B, being the image of the compact set $ A \times B $ under the continuous addition map : $ + : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n $, so the measurability conditions are easy to verify.
|
20 |
+
|
21 |
+
The condition that $ A,B $ are both non-empty is clearly necessary. This condition is not part of the multiplicative versions of BM stated below.
|
22 |
+
|
23 |
+
We give two well known proofs of Brunn–Minkowski.
|
24 |
+
|
25 |
+
We give a well-known argument that follows a general recipe of arguments in measure theory; namely, it establishes a simple case by direct analysis, uses induction to establish a finitary extension of that special case, and then uses general machinery to obtain the general case as a limit. A discussion of the history of this proof can be found in Theorem 4.1 in .
|
26 |
+
|
27 |
+
We prove the version of the Brunn–Minkowski theorem that only requires $ A, B, A+ B $ to be measurable and non-empty.
|
28 |
+
|
29 |
+
* The case that A and B are axis aligned boxes:
|
30 |
+
|
31 |
+
By translation invariance of volumes, it suffices to take $A = \prod_{i = 1}^n [0,a_i], B = \prod_{i = 1}^n [0,b_i] $. Then $ A + B = \prod_{i = 1}^n [0, a_i + b_i] $. In this special case, the Brunn–Minkowski inequality asserts that $ \prod ( a_i + b_i)^{1/n} \geq \prod a_i^{1/n} + \prod b_i^{1/n} $. After dividing both sides by $ \prod ( a_i + b_i)^{1/n} $ , this follows from the AM–GM inequality: $ (\prod \frac{ a_i}{a_i + b_i})^{1/n} + (\prod \frac{ b_i}{a_i + b_i})^{1/n} \leq \sum \frac{1}{n} \frac{ a_i + b_i} {a_i + b_i} = 1 $.
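This base case is simple enough to sanity-check numerically; a short Python sketch (random boxes, with our own encoding as lists of side lengths):

<code>
import math, random

def check_boxes(n, trials=10_000):
    # For axis-aligned boxes, A + B is again a box with summed side lengths, so
    # Brunn-Minkowski reads prod(a_i+b_i)^(1/n) >= prod(a_i)^(1/n) + prod(b_i)^(1/n).
    for _ in range(trials):
        a = [random.uniform(0.1, 10.0) for _ in range(n)]
        b = [random.uniform(0.1, 10.0) for _ in range(n)]
        lhs = math.prod(ai + bi for ai, bi in zip(a, b)) ** (1 / n)
        rhs = math.prod(a) ** (1 / n) + math.prod(b) ** (1 / n)
        assert lhs >= rhs - 1e-9
    return True

print(check_boxes(3))   # True
</code>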
|
32 |
+
|
33 |
+
* The case where A and B are both disjoint unions of finitely many such boxes:
|
34 |
+
|
35 |
+
We will use induction on the total number of boxes, where the previous calculation establishes the base case of two boxes. First, we observe that there is an axis aligned hyperplane H such that each side of H contains an entire box of A. To see this, it suffices to reduce to the case where A consists of two boxes, and then calculate that the negation of this statement implies that the two boxes have a point in common.
|
36 |
+
|
37 |
+
For a body X, we let $ X^{-}, X^+ $ denote the intersections of X with the "right" and "left" halfspaces defined by H. Noting again that the statement of Brunn–Minkowski is translation invariant, we then translate B so that $ \frac{ \mu(A^+)}{\mu(B^+)} = \frac{ \mu(A^-)} { \mu(B^-)} $; such a translation exists by the intermediate value theorem: if v is perpendicular to H, then $ t \mapsto \mu( (B + tv)^+) $ is continuous, and the ratio $ \frac{ \mu((B + tv) ^+)} { \mu((B + tv)^-)} $ has limiting values 0 and $ \infty $ as $ t \to -\infty, t \to \infty $, so it takes on the value $ \frac{ \mu(A^+)}{\mu(A^-)} $ at some point.
|
38 |
+
|
39 |
+
We now have the pieces in place to complete the induction step. First, observe that $ A^+ + B^+ $ and $ A^- + B^- $ are disjoint subsets of $ A + B $, and so $\mu(A+B)\geq \mu(A^++B^+)+\mu(A^-+B^-).$ Now, $ A^+, A^- $ both have one fewer box than A, while $ B^+, B^- $ each have at most as many boxes as B. Thus, we can apply the induction hypothesis: $ \mu(A^+ +B^+) \geq (\mu(A^+)^{1/n}+\mu(B^+)^{1/n})^n $ and $ \mu(A^{-} +B^{-}) \geq (\mu(A^-)^{1/n}+\mu(B^-)^{1/n})^n $.
|
40 |
+
|
41 |
+
Elementary algebra shows that if $ \frac{ \mu(A^+)}{\mu(B^+)} = \frac{ \mu(A^-)} { \mu(B^-)} $, then also $ \frac{ \mu(A^+)}{\mu(B^+)} = \frac{ \mu(A^-)} { \mu(B^-)} = \frac{ \mu(A)} { \mu(B)} $, so we can calculate:
|
42 |
+
|
43 |
+
<math>
|
44 |
+
|
45 |
+
\begin{align}
|
46 |
+
|
47 |
+
\mu(A+B) \geq \mu(A^+ +B^+)+\mu(A^{-} +B^{-}) \geq (\mu(A^+)^{1/n}+\mu(B^+)^{1/n})^n+(\mu(A^-)^{1/n}+\mu(B^-)^{1/n})^n \\=\mu(B^+) ( 1 + \frac{ \mu(A^+)^{1/n}}{\mu(B^+)^{1/n}})^n + \mu(B^-) ( 1 + \frac{ \mu(A^-)^{1/n}} { \mu(B^-)^{1/n}})^n =( 1 + \frac{ \mu(A)^{1/n}} { \mu(B)^{1/n}})^n ( \mu(B^+) + \mu(B^-) ) =(\mu(B)^{1/n}+\mu(A)^{1/n})^n
|
48 |
+
|
49 |
+
\end{align}
|
50 |
+
|
51 |
+
</math>
|
52 |
+
|
53 |
+
* The case that A and B are bounded open sets:
|
54 |
+
|
55 |
+
In this setting, both bodies can be approximated arbitrarily well by unions of disjoint axis aligned rectangles contained in their interior; this follows from general facts about the Lebesgue measure of open sets. That is, we have a sequence of bodies $ A_k \subseteq A $, which are disjoint unions of finitely many axis aligned rectangles, where $ \mu ( A \setminus A_k) \leq 1/k $, and likewise $ B_k \subseteq B $. Then we have that $ A + B \supseteq A_k + B_k $, so $\mu(A + B)^{1/n} \geq \mu(A_k + B_k)^{1/n} \geq \mu(A_k)^{1/n} + \mu(B_k)^{1/n} $. The right hand side converges to $\mu(A)^{1/n} + \mu(B)^{1/n} $ as $ k \to \infty $, establishing this special case.
|
56 |
+
|
57 |
+
* The case that A and B are compact sets:
|
58 |
+
|
59 |
+
For a compact body X, define $ X_{\epsilon} = X + B(0,\epsilon) $ to be the $ \epsilon $-thickening of X. Here each $ B(0,\epsilon) $ is the open ball of radius $ \epsilon $, so that $ X_{\epsilon} $ is a bounded, open set. We note that $ \bigcap_{\epsilon > 0} X_{\epsilon} = \text{cl}(X) $, so that if X is compact, then $ \lim_{\epsilon \to 0} \mu(X_\epsilon) = \mu(X) $. By using associativity and commutativity of Minkowski sum, along with the previous case, we can calculate that <math>\mu( (A + B)_{2 \epsilon})^{1/n} = \mu(A_{\epsilon} + B_{\epsilon})^{1/n}
|
60 |
+
|
61 |
+
\geq \mu(A_{\epsilon})^{1/n} + \mu(B_{\epsilon})^{1/n} </math>. Sending $ \epsilon $ to 0 establishes the result.
|
62 |
+
|
63 |
+
* The case of bounded measurable sets:
|
64 |
+
|
65 |
+
Recall that by the regularity theorem for Lebesgue measure, for any bounded measurable set X and for any $ k \geq 1 $, there is a compact set $ X_{k} \subseteq X $ with $ \mu( X \setminus X_{k} ) < 1/k $. Thus, $ \mu(A + B) \geq \mu(A_k + B_k) \geq ( \mu(A_k)^{1/n} + \mu(B_k)^{1/n})^n $ for all k, using the case of Brunn–Minkowski shown for compact sets. Sending $ k \to \infty $ establishes the result.
|
66 |
+
|
67 |
+
* The case of measurable sets:
|
68 |
+
|
69 |
+
We let $ A_k = [-k,k]^n \cap A, B_k = [-k,k]^n \cap B $, and again argue using the previous case that $ \mu(A + B) \geq \mu(A_k + B_k) \geq ( \mu(A_k)^{1/n} + \mu(B_k)^{1/n})^n $, hence the result follows by sending k to infinity.
|
70 |
+
|
71 |
+
We give a proof of the Brunn–Minkowski inequality as a corollary to the Prékopa–Leindler inequality, a functional version of the BM inequality. We will first prove PL, then show that PL implies a multiplicative version of BM, and then show that multiplicative BM implies additive BM. The argument here is simpler than the proof via cuboids; in particular, we only need to prove the BM inequality in one dimension. This happens because the more general statement of the PL inequality than the BM inequality allows for an induction argument.
|
72 |
+
|
73 |
+
* The multiplicative form of the BM inequality
|
74 |
+
|
75 |
+
First, we note that the Brunn–Minkowski inequality implies a multiplicative version, using the inequality $ \lambda x + (1 - \lambda) y \geq x^{\lambda} y^{1-\lambda} $, which holds for $ x,y \geq 0, \lambda \in [0,1] $. In particular, $ \mu(\lambda A + (1 - \lambda) B) \geq ( \lambda \mu(A)^{1/n} + ( 1 - \lambda) \mu(B)^{1/n})^n \geq \mu(A)^{\lambda} \mu(B)^{1 - \lambda} $. The Prékopa–Leindler inequality is a functional generalization of this version of Brunn–Minkowski.
|
76 |
+
|
77 |
+
* Prékopa–Leindler inequality
|
78 |
+
|
79 |
+
Theorem (Prékopa–Leindler inequality): Fix $ \lambda \in (0,1) $. Let $ f,g,h : \mathbb{R}^n \to \mathbb{R}_+ $ be non-negative, measurable functions satisfying $ h( \lambda x + (1 - \lambda)y) \geq f(x)^{\lambda} g(y)^{1 - \lambda} $ for all $ x,y \in \mathbb{R}^n $. Then $ \int_{\mathbb{R}^n} h(x) dx \geq (\int_{\mathbb{R}^n} f(x) dx)^{\lambda} (\int_{\mathbb{R}^n} g(x) dx)^{1 - \lambda} $.
|
80 |
+
|
81 |
+
Proof (Mostly following ):
|
82 |
+
|
83 |
+
We will need the one dimensional version of BM, namely that if <math display= "inline" > A, B, A + B \subseteq \mathbb{R} </math> are measurable, then <math display= "inline" > \mu (A + B) \geq \mu(A) + \mu(B) </math>. First, assuming that <math display= "inline" > A, B </math> are bounded, we shift <math display= "inline" > A, B </math> so that <math display= "inline" > A \cap B = \{0\} </math>. Thus, <math display= "inline" > A + B \supset A \cup B </math>, whence by almost disjointedness we have that <math display= "inline" > \mu(A + B) \geq \mu(A) + \mu(B) </math>. We then pass to the unbounded case by filtering with the intervals <math display= "inline" > [-k,k].</math>
|
84 |
+
|
85 |
+
We first show the $ n = 1 $ case of the PL inequality. Let $ L_h(t) = \{x : h(x) \geq t \} $, and note that $ L_h(t) \supseteq \lambda L_f(t) + (1 - \lambda) L_g(t) $. Thus, by the one-dimensional version of Brunn–Minkowski, we have that $ \mu( L_h(t)) \geq \mu( \lambda L_f(t) + (1 - \lambda) L_g(t) ) \geq \lambda \mu(L_f(t)) + (1 - \lambda) \mu( L_g(t)) $. We recall that if $ f(x) $ is non-negative, then Fubini's theorem implies $ \int_{\mathbb{R}} h(x) dx = \int_{t \geq 0} \mu( L_h(t)) dt $. Then, we have that $ \int_{\mathbb{R}} h(x) dx = \int_{t \geq 0} \mu( L_h(t)) dt \geq \lambda \int_{t \geq 0} \mu( L_f(t)) + (1 - \lambda) \int_{t \geq 0} \mu( L_g(t)) = \lambda \int_{\mathbb{R}} f(x) dx + (1 - \lambda) \int_{\mathbb{R}} g(x) dx \geq (\int_{\mathbb{R}} f(x) dx)^{\lambda} (\int_{\mathbb{R}} g(x) dx)^{1 - \lambda} $, where in the last step we use the weighted AM–GM inequality, which asserts that $ \lambda x + (1 - \lambda) y \geq x^{\lambda} y^{1- \lambda} $ for $ \lambda \in (0,1), x,y \geq 0$.
|
86 |
+
|
87 |
+
Now we prove the $ n > 1 $ case. For $ x,y \in \mathbb{R}^{n-1}, \alpha,\beta \in \mathbb{R} $, we pick $ \lambda \in [0,1] $ and set $ \gamma = \lambda \alpha + (1 - \lambda) \beta $. For any c, we define $ h_{c}(x) = h(x, c)$, that is, defining a new function on n-1 variables by setting the last variable to be $ c $. Applying the hypothesis and doing nothing but formal manipulation of the definitions, we have that $ h_{\gamma} ( \lambda x + (1 - \lambda)y ) = h ( \lambda x + (1 - \lambda) y , \lambda \alpha + ( 1 - \lambda) \beta) ) = h ( \lambda ( x, \alpha) + (1 - \lambda) (y, \beta) ) \geq f(x, \alpha)^{\lambda} g( y, \beta)^{1 - \lambda} = f_{\alpha}(x)^{\lambda} g_{\beta}(y)^{1 - \lambda} $.
|
88 |
+
|
89 |
+
Thus, by the inductive case applied to the functions $ h_{\gamma}, f_{\alpha}, g_{\beta} $, we obtain $ \int_{\mathbb{R}^{n-1}} h_{\gamma}(z) dz \geq (\int_{ \mathbb{R}^{n-1} } f_{\alpha}(z) dz )^{\lambda} (\int_{ \mathbb{R}^{n-1} } g_{\beta}(z) dz )^{1 - \lambda} $. We define $ H(\gamma) := \int_{\mathbb{R}^{n-1}} h_{\gamma}(z) dz $ and $ F(\alpha), G(\beta) $ similarly. In this notation, the previous calculation can be rewritten as: $ H( \lambda \alpha + (1 - \lambda) \beta) \geq F(\alpha)^{\lambda} G(\beta)^{1 - \lambda} $. Since we have proven this for any fixed $ \alpha, \beta \in \mathbb{R}$, this means that the functions $ H,F,G $ satisfy the hypothesis for the one dimensional version of the PL theorem. Thus, we have that $ \int_{\mathbb{R}} H(\gamma) d \gamma \geq ( \int_{\mathbb{R}} F(\alpha) d \alpha)^{\lambda} ( \int_{\mathbb{R}} G(\beta) d \beta)^{1 - \lambda} $, implying the claim by Fubini's theorem. QED
|
90 |
+
|
91 |
+
* PL implies multiplicative BM
|
92 |
+
|
93 |
+
The multiplicative version of Brunn–Minkowski follows from the PL inequality, by taking $ h = 1_{\lambda A + (1 - \lambda) B}, f = 1_A, g = 1_B $.
|
94 |
+
|
95 |
+
* Multiplicative BM implies Additive BM
|
96 |
+
|
97 |
+
We now explain how to derive the BM-inequality from the PL-inequality. First, by using the indicator functions for $ A, B, \lambda A + (1 - \lambda) B$, the Prékopa–Leindler inequality quickly gives the multiplicative version of Brunn–Minkowski: <math display= "inline" > \mu ( \lambda A + (1 - \lambda) B) \geq \mu(A)^{\lambda} \mu(B)^{1 - \lambda} </math>. We now show how the multiplicative BM-inequality implies the usual, additive version.
|
98 |
+
|
99 |
+
We assume that both A,B have positive volume, as otherwise the inequality is trivial, and normalize them to have volume 1 by setting $ A' = \frac{ A}{ \mu(A)^{1/n}}, B' = \frac{B}{\mu(B)^{1/n}} $. We define $ \lambda' = \frac{ \lambda \mu(B)^{1/n}}{ (1 - \lambda) \mu(A)^{1/n} + \lambda \mu(B)^{1/n} } $; note that $1 - \lambda' = \frac{ (1 - \lambda) \mu(A)^{1/n}}{ (1 - \lambda) \mu(A)^{1/n} + \lambda \mu(B)^{1/n} } $. With these definitions, and using that $ \mu(A') = \mu(B') = 1 $, we calculate using the multiplicative Brunn–Minkowski inequality that:
|
100 |
+
$$
|
101 |
+
\mu \left( \frac { (1 - \lambda) A + \lambda B }{ (1 - \lambda) \mu(A)^{1/n} + \lambda \mu(B)^{1/n} } \right) = \mu ( (1 - \lambda') A' + \lambda' B') \geq \mu(A')^{1 - \lambda'} \mu(B')^{\lambda'} = 1.
|
102 |
+
$$
|
103 |
+
|
104 |
+
The additive form of Brunn–Minkowski now follows by pulling the scaling out of the leftmost volume calculation and rearranging.
|
105 |
+
|
106 |
+
The Brunn–Minkowski inequality gives much insight into the geometry of high dimensional convex bodies. In this section we sketch a few of those insights.
|
107 |
+
|
108 |
+
Consider a convex body <math display= "inline" > K \subseteq \mathbb{R}^n </math>. Let $ K(x) = K \cap \{ x_1 = x \} $ be vertical slices of K. Define $ r(x) = \mu(K(x))^{\frac{1}{n-1}} $ to be the radius function; if the slices of K are discs, then r(x) gives the radius of the disc K(x), up to a constant. For more general bodies this radius function does not appear to have a completely clear geometric interpretation beyond being the radius of the disc obtained by packing the volume of the slice as close to the origin as possible; in the case when K(x) is not a disc, the example of a hypercube shows that the average distance to the center of mass can be much larger than r(x). We note that sometimes in the context of a convex geometry, the radius function has a different meaning, here we follow the terminology of .
|
109 |
+
|
110 |
+
By convexity of K, we have that $ K( \lambda x + (1 - \lambda)y ) \supseteq \lambda K(x) + (1 - \lambda) K(y) $. Applying the Brunn–Minkowski inequality gives $r(K( \lambda x + (1 - \lambda)y )) \geq \lambda r ( K(x)) + (1 - \lambda) r ( K(y)) $, provided $ K(x) \not = \emptyset, K(y) \not = \emptyset $. This shows that the radius function is concave on its support, matching the intuition that a convex body does not dip into itself along any direction. This result is sometimes known as Brunn's theorem.
|
111 |
+
|
112 |
+
Again consider a convex body <math display = "inline" > K </math>. Fix some line <math display = "inline" > l </math> and for each <math display = "inline" > t \in l </math> let <math display = "inline" > H_t </math> denote the affine hyperplane orthogonal to <math display = "inline" > l</math> that passes through <math display = "inline" > t </math>. Define <math display = "inline" > r(t) = Vol( K \cap H_t) </math>; as discussed in the previous section, this function is concave. Now, let <math display = "inline" > K' = \bigcup_{t \in l, K \cap H_t \not = \emptyset } B(t, r(t)) \cap H_t </math>. That is, <math display = "inline" > K' </math> is obtained from <math display = "inline" > K </math> by replacing each slice <math display = "inline" > H_t \cap K </math> with a disc of the same <math display = "inline" > (n-1) </math>-dimensional volume centered on <math display = "inline" > l </math> inside of <math display = "inline" > H_t </math>. The concavity of the radius function defined in the previous section implies that <math display = "inline" > K' </math> is convex. This construction is called the Brunn–Minkowski symmetrization.
|
113 |
+
|
114 |
+
Theorem (Grunbaum's theorem ): Consider a convex body <math > K \subseteq \mathbb{R}^n </math>. Let <math > H </math> be any half-space containing the center of mass of <math > K </math>; that is, the expected location of a uniform point sampled from <math > K. </math> Then $ \mu( H \cap K) \geq (\frac{n}{n+1})^n \mu(K) \geq \frac{1}{e} \mu(K) $.
|
115 |
+
|
116 |
+
Grunbaum's theorem can be proven using Brunn–Minkowski inequality, specifically the convexity of the Brunn–Minkowski symmetrization . See for a proof sketch.
|
117 |
+
|
118 |
+
Grunbaum's inequality has the following fair cake cutting interpretation. Suppose two players are playing a game of cutting up an <math > n </math> dimensional, convex cake. Player 1 chooses a point in the cake, and player two chooses a hyperplane to cut the cake along. Player 1 then receives the cut of the cake containing his point. Grunbaum's theorem implies that if player 1 chooses the center of mass, then the worst that an adversarial player 2 can do is give him a piece of cake with volume at least a $ 1/e $ fraction of the total. In dimensions 2 and 3, the most common dimensions for cakes, the bounds given by the theorem are approximately $ .444, .42 $ respectively. Note, however, that in <math > n </math> dimensions, calculating the centroid is $ \# P $ hard , limiting the usefulness of this cake cutting strategy for higher dimensional, but computationally bounded creatures.
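The quoted constants are immediate to reproduce (a quick check written for this text, not from the source):

<code>
# Grunbaum's guarantee (n/(n+1))**n, which decreases toward 1/e as n grows.
for n in (2, 3, 10, 100):
    print(n, (n / (n + 1)) ** n)   # 0.4444..., 0.4218..., 0.3855..., 0.3697...
</code>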
|
119 |
+
|
120 |
+
Applications of Grunbaum's theorem also appear in convex optimization, specifically in analyzing the convergence of the center of gravity method. See theorem 2.1 in
|
121 |
+
|
122 |
+
Let $ B = B(0,1) = \{ x \in \mathbb{R}^n : ||x||_2 \leq 1 \} $ denote the unit ball. For a convex body, K, let $ S(K) = \lim_{\epsilon \to 0} \frac{ \mu(K + \epsilon B) - \mu(K)}{\epsilon} $ define its surface area. This agrees with the usual meaning of surface area by the Minkowski-Steiner formula. Consider the function $ c(K) = \frac{ \mu(K)^{1/n} }{ S(K)^{1/(n-1)}} $. The isoperimetric inequality states that this is maximized on Euclidean balls.
|
123 |
+
|
124 |
+
First, observe that Brunn–Minkowski implies $ \mu( K + \epsilon B) \geq ( \mu(K)^{1/n} + \epsilon V(B)^{1/n} )^n = V(K) ( 1 + \epsilon (\frac{ \mu(B)}{\mu(K)})^{1/n})^n \geq \mu(K) ( 1 + n \epsilon (\frac{ \mu(B)}{\mu(K)})^{1/n}), $ where in the last inequality we used that $ (1 + x)^n \geq 1 + nx $ for $ x \geq 0 $. We use this calculation to lower bound the surface area of $ K $ via $ S(K) = \lim_{\epsilon \to 0} \frac{ \mu(K + \epsilon B) - \mu(K)}{\epsilon} \geq n \mu(K) (\frac{ \mu(B)}{\mu(K)})^{1/n}. $ Next, we use the fact that $ S(B) = n \mu(B) $, which follows from the Minkowski-Steiner formula, to calculate $ \frac{ S(K) }{S(B)} = \frac{ S(K) }{n \mu(B) } \geq \frac{ \mu(K) (\frac{ \mu(B)}{\mu(K)})^{1/n} }{\mu(B)} = \mu(K)^{\frac{n-1}{n}} \mu(B)^{\frac{ 1 -n }{n}}. $ Rearranging this yields the isoperimetric inequality: $\frac{ \mu(B)^{1/n} }{ S(B)^{1/(n-1)}} \geq \frac{ \mu(K)^{1/n} }{ S(K)^{1/(n-1)}}. $
|
125 |
+
|
126 |
+
The Brunn–Minkowski inequality can be used to deduce the following inequality $ V(K, \ldots, K, L)^n \geq V(K)^{n-1} V(L) $, where the $ V(K, \ldots, K,L) $ term is a mixed-volume. Equality holds iff K,L are homothetic. (See theorem 3.4.3 in Hug and Weil's course on convex geometry.)
|
127 |
+
|
128 |
+
We recall the following facts about mixed volumes: $ \mu ( \lambda_1 K_1 + \cdots + \lambda_r K_r ) = \sum_{j_1, \ldots, j_n = 1}^r V(K_{j_1}, \ldots, K_{j_n}) \lambda_{j_1} \cdots \lambda_{j_n} $, so that in particular if $ g(t) = \mu ( K + tL) = \mu(K) + n V(K, \ldots, K, L) t + \ldots $, then $ g'(0) = n V(K, \ldots, K, L) $.
|
129 |
+
|
130 |
+
Let $ f(t) := \mu( K + tL)^{1/n} $. Brunn's theorem implies that this is concave for $ t \in [0,1] $. Thus, $ f^+(0) \geq f(1) - f(0) = \mu(K + L)^{1/n} - \mu(K)^{1/n} $, where $ f^+(0) $ denotes the right derivative. We also have that $ f^+(0) = \frac{1}{n} \mu(K)^{ \frac{1 - n}{n}} \cdot n V(K,\ldots, K,L) = \mu(K)^{ \frac{1 - n}{n}} V(K,\ldots,K,L) $. From this we get $ \mu(K)^{ \frac{1 - n}{n}} V(K, \ldots, K,L) \geq \mu(K + L)^{1/n} - \mu(K)^{1/n} \geq \mu(L)^{1/n}$, where we applied BM in the last inequality; rearranging and raising to the $n$-th power yields the claimed $ V(K, \ldots, K, L)^n \geq \mu(K)^{n-1} \mu(L) $.
|
131 |
+
|
132 |
+
We prove the following theorem on concentration of measure. See also Concentration of measure#Concentration on the sphere.
|
133 |
+
|
134 |
+
Theorem: Let $ S $ be the unit sphere in $ \mathbb{R}^n $. Let $ X \subseteq S $. Define $ X_{\epsilon} = \{ z \in S : d(z,X) \leq \epsilon \} $, where d refers to the Euclidean distance in $ \mathbb{R}^n $. Let $ \nu $ denote the surface area on the sphere. Then, for any $ \epsilon \in (0,1 ]$ we have that $ \frac{\nu(X_{\epsilon})}{\nu(S) } \geq 1 - \frac{ \nu(S)}{\nu(X)} e^{ - \frac{n \epsilon^2}{4}} $.
|
135 |
+
|
136 |
+
Proof: Let $ \delta = \epsilon^2 / 8 $, and let $ Y = S \setminus X_{\epsilon} $. Then, for $ x \in X, y \in Y $ one can show, using $ \frac{1}{2} ||x + y||^2 = ||x||^2 + ||y||^2 - \frac{1}{2}||x - y||^2 $ and $ \sqrt{1 - x} \leq 1 - x/2 $ for $ x \leq 1 $, that $ || \frac{ x + y}{2} || \leq 1 - \delta $. In particular, $ \frac{x + y}{2} \in (1 - \delta) B(0,1) $.
|
137 |
+
|
138 |
+
We let $ \overline{X} = \text{Conv}(X, \{0\}), \overline{Y} = \text{Conv}(Y, \{0\}) $, and aim to show that $ \frac{ \overline{X} + \overline{Y} }{2} \subseteq (1 - \delta) B(0,1) $. Let $ x \in X, y \in Y, \alpha, \beta \in [0,1] , \bar{x} = \alpha x, \bar{y} = \beta y $. The argument below will be symmetric in $ \bar{x}, \bar{y} $, so we assume without loss of generality that $ \alpha \geq \beta $ and set $ \gamma = \beta / \alpha \leq 1 $. Then,
|
139 |
+
$$
|
140 |
+
\frac{ \bar{x} + \bar{y}}{2} = \frac{ \alpha x + \beta y}{2} = \alpha \frac{ x + \gamma y}{2} = \alpha ( \gamma \frac{ x + y}{2} + (1 - \gamma) \frac{x}{2} ) = \alpha \gamma \frac{ x + y}{2} + \alpha ( 1 - \gamma) \frac{x}{2}
|
141 |
+
$$.
|
142 |
+
|
143 |
+
This implies that $ \frac{ \bar{x} + \bar{y}}{2} \in \alpha \gamma (1 - \delta) B + \alpha ( 1 - \gamma) ( 1 - \delta) B = \alpha ( 1 - \delta) B \subseteq (1 - \delta) B $. (Using that for any convex body K and $ \gamma \in [0,1] $, $ \gamma K + (1 - \gamma) K = K $.)
|
144 |
+
|
145 |
+
Thus, we know that $ \frac{ \overline{X} + \overline{Y} }{2} \subseteq (1 - \delta) B(0,1) $, so $ \mu ( \frac{ \overline{X} + \overline{Y} }{2} ) \leq (1 - \delta)^n \mu(B(0,1)) $. We apply the multiplicative form of the Brunn–Minkowski inequality to lower bound the first term by $ \sqrt{ \mu(\bar{X}) \mu( \bar{Y}) } $, giving us $ (1 - \delta)^n \mu(B) \geq \mu(\bar{X})^{1/2} \mu( \bar{Y})^{1/2} $.
|
146 |
+
$$
|
147 |
+
1 - \frac{ \nu(X_{\epsilon}) } { \nu(S)} = \frac{ \nu(Y) }{\nu(S)} = \frac{ \mu( \bar{Y} )}{\mu( B)} \leq ( 1 - \delta)^{2n} \frac{ \mu(B)}{\mu(\bar{X} )} \leq (1 - \delta)^{2n} \frac{\nu(S)}{\nu(X)} \leq e^{ - 2n \delta} \frac{\nu(S)}{\nu(X)} = e^{ - n \epsilon^2 / 4} \frac{\nu(S)}{\nu(X)}
|
148 |
+
$$. QED
|
149 |
+
|
150 |
+
Versions of this result also hold for so-called strictly convex surfaces, where the result depends on the modulus of convexity. However, the notion of surface area requires modification.
|
151 |
+
|
152 |
+
The proof of the Brunn–Minkowski theorem establishes that the function
|
153 |
+
$$
|
154 |
+
A \mapsto [\mu (A)]^{1/n}
|
155 |
+
$$
|
156 |
+
|
157 |
+
is concave in the sense that, for every pair of nonempty compact subsets A and B of R<sup>n</sup> and every 0 ≤ t ≤ 1,
|
158 |
+
$$
|
159 |
+
\left[ \mu (t A + (1 - t) B ) \right]^{1/n} \geq t [ \mu (A) ]^{1/n} + (1 - t) [ \mu (B) ]^{1/n}.
|
160 |
+
$$
|
161 |
+
|
162 |
+
For convex sets A and B of positive measure, the inequality in the theorem is strict
|
163 |
+
|
164 |
+
for 0 < t < 1 unless A and B are positive homothetic, i.e. are equal up to translation and dilation by a positive factor.
|
165 |
+
|
166 |
+
It is instructive to consider the case where $ A $ is an $ l \times l $ square in the plane, and $ B $ is a ball of radius $ \epsilon $. In this case, $ A + B $ is a rounded square, and its volume can be accounted for as the four rounded quarter circles of radius $ \epsilon $, the four rectangles of dimensions $ l \times \epsilon $ along the sides, and the original square. Thus, $ \mu( A + B) = l^2 + 4 \epsilon l + \pi \epsilon^2 = \mu(A) + 4 \epsilon l + \mu(B) \geq \mu(A) + 2 \sqrt{\pi} \epsilon l + \mu(B) = \mu(A) + 2 \sqrt{ \mu(A) \mu(B) } + \mu(B) = ( \mu(A)^{1/2} + \mu(B)^{1/2})^2 $.
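A direct numeric check of this computation, with arbitrarily chosen side length and radius:

from math import pi, sqrt

l, eps = 2.0, 0.3
mu_A = l ** 2                                  # area of the square
mu_B = pi * eps ** 2                           # area of the ball
mu_AB = l ** 2 + 4 * eps * l + pi * eps ** 2   # area of the rounded square A + B
bm_bound = (sqrt(mu_A) + sqrt(mu_B)) ** 2      # Brunn-Minkowski lower bound
print(mu_AB, bm_bound, mu_AB >= bm_bound)      # 6.68... 6.40... True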
|
167 |
+
|
168 |
+
This example also hints at the theory of mixed volumes, since the terms that appear in the expansion of the volume of $ A + B $ correspond to the different-dimensional pieces of A. In particular, if we rewrite Brunn–Minkowski as $ \mu( A + B) \geq ( \mu(A)^{1/n} + \mu(B)^{1/n})^n $, we see that we can think of the cross terms of the binomial expansion of the latter as accounting, in some fashion, for the mixed volume representation of $ \mu(A + B) = V(A, \ldots, A) + n V(B, A, \ldots, A) + \ldots + {n \choose j} V(B,\ldots, B, A,\ldots,A) + \ldots + n V(B,\ldots, B,A) + \mu(B) $. This same phenomenon can also be seen for the sum of an n-dimensional $ l \times l $ box and a ball of radius $ \epsilon $, where the cross terms in $ ( \mu(A)^{1/n} + \mu(B)^{1/n})^n $, up to constants, account for the mixed volumes. This is made precise for the first mixed volume in the section above on the applications to mixed volumes.
|
169 |
+
|
170 |
+
The left-hand side of the BM inequality can in general be much larger than the right side. For instance, we can take X to be the x-axis, and Y the y-axis inside the plane; then each has measure zero but the sum has infinite measure. Another example is given by the Cantor set. If $ C $ denotes the middle third Cantor set, then it is an exercise in analysis to show that $ C + C = [0,2] $.
|
171 |
+
|
172 |
+
The Brunn–Minkowski inequality continues to be relevant to modern geometry and algebra. For instance, there are connections to algebraic geometry, and combinatorial versions about counting sets of points inside the integer lattice.
|
wiki/wikipedia/1022.txt
ADDED
@@ -0,0 +1,261 @@
1 |
+
In mathematics, Stirling's approximation (or Stirling's formula) is an approximation for factorials. It is a good approximation, leading to accurate results even for small values of $n$. It is named after James Stirling, though it was first stated by Abraham de Moivre.
|
2 |
+
|
3 |
+
The version of the formula typically used in applications is
|
4 |
+
|
5 |
+
<math display=block>\ln(n!) = n\ln n - n +\Theta(\ln n)</math>
|
6 |
+
|
7 |
+
(in Big Theta notation, as $n\to\infty$), or, by changing the base of the logarithm (for instance in the worst-case lower bound for comparison sorting),
|
8 |
+
|
9 |
+
<math display=block>\log_2 (n!) = n\log_2 n - n\log_2 e +\Theta(\log_2 n).</math> Specifying the constant in the $O(\ln n)$ error term gives $\tfrac{1}{2}\ln(2\pi n)$, yielding the more precise formula:
|
10 |
+
|
11 |
+
<math display=block>n! \sim \sqrt{2 \pi n}\left(\frac{n}{e}\right)^n,</math>
|
12 |
+
|
13 |
+
where the sign ~ means that the two quantities are asymptotic: their ratio tends to 1 as $n$ tends to infinity. The following version of the bound holds for all $n \ge 1$, rather than only asymptotically:
|
14 |
+
|
15 |
+
<math display=block>\sqrt{2 \pi n}\ \left(\frac{n}{e}\right)^n e^{\frac{1}{12n + 1}} < n! < \sqrt{2 \pi n}\ \left(\frac{n}{e}\right)^n e^{\frac{1}{12n}}. </math>
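These two-sided bounds are easy to verify numerically; a minimal sketch in Python:

from math import e, exp, factorial, pi, sqrt

def stirling(n):
    # Leading-order Stirling approximation sqrt(2*pi*n) * (n/e)^n.
    return sqrt(2 * pi * n) * (n / e) ** n

for n in (1, 5, 10, 20):
    lower = stirling(n) * exp(1 / (12 * n + 1))
    upper = stirling(n) * exp(1 / (12 * n))
    assert lower < factorial(n) < upper   # holds for all n >= 1
    print(n, lower, factorial(n), upper)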
|
16 |
+
|
17 |
+
Roughly speaking, the simplest version of Stirling's formula can be quickly obtained by approximating the sum
|
18 |
+
|
19 |
+
<math display=block>\ln(n!) = \sum_{j=1}^n \ln j</math>
|
20 |
+
|
21 |
+
with an integral:
|
22 |
+
|
23 |
+
<math display=block>\sum_{j=1}^n \ln j \approx \int_1^n \ln x {\rm d}x = n\ln n - n + 1.</math>
|
24 |
+
|
25 |
+
The full formula, together with precise estimates of its error, can be derived as follows. Instead of approximating $n!$, one considers its natural logarithm, as this is a slowly varying function:
|
26 |
+
|
27 |
+
<math display=block>\ln(n!) = \ln 1 + \ln 2 + \cdots + \ln n.</math>
|
28 |
+
|
29 |
+
The right-hand side of this equation minus
|
30 |
+
|
31 |
+
<math display=block>\tfrac{1}{2}(\ln 1 + \ln n) = \tfrac{1}{2}\ln n</math>
|
32 |
+
|
33 |
+
is the approximation by the trapezoid rule of the integral
|
34 |
+
|
35 |
+
<math display=block>\ln(n!) - \tfrac{1}{2}\ln n \approx \int_1^n \ln x{\rm d}x = n \ln n - n + 1,</math>
|
36 |
+
|
37 |
+
and the error in this approximation is given by the Euler–Maclaurin formula:
|
38 |
+
|
39 |
+
<math display=block>\begin{align}
|
40 |
+
|
41 |
+
\ln(n!) - \tfrac{1}{2}\ln n & = \tfrac{1}{2}\ln 1 + \ln 2 + \ln 3 + \cdots + \ln(n-1) + \tfrac{1}{2}\ln n\\
|
42 |
+
|
43 |
+
& = n \ln n - n + 1 + \sum_{k=2}^{m} \frac{(-1)^k B_k}{k(k-1)} \left( \frac{1}{n^{k-1}} - 1 \right) + R_{m,n},
|
44 |
+
|
45 |
+
\end{align}</math>
|
46 |
+
|
47 |
+
where $B_k$ is a Bernoulli number, and R<sub>m,n</sub> is the remainder term in the Euler–Maclaurin formula. Take limits to find that
|
48 |
+
|
49 |
+
<math display=block>\lim_{n \to \infty} \left( \ln(n!) - n \ln n + n - \tfrac{1}{2}\ln n \right) = 1 - \sum_{k=2}^{m} \frac{(-1)^k B_k}{k(k-1)} + \lim_{n \to \infty} R_{m,n}.</math>
|
50 |
+
|
51 |
+
Denote this limit as $y$. Because the remainder R<sub>m,n</sub> in the Euler–Maclaurin formula satisfies
|
52 |
+
|
53 |
+
<math display=block>R_{m,n} = \lim_{n \to \infty} R_{m,n} + O \left( \frac{1}{n^m} \right),</math>
|
54 |
+
|
55 |
+
where big-O notation is used, combining the equations above yields the approximation formula in its logarithmic form:
|
56 |
+
|
57 |
+
<math display=block>\ln(n!) = n \ln \left( \frac{n}{e} \right) + \tfrac{1}{2}\ln n + y + \sum_{k=2}^{m} \frac{(-1)^k B_k}{k(k-1)n^{k-1}} + O \left( \frac{1}{n^m} \right).</math>
|
58 |
+
|
59 |
+
Taking the exponential of both sides and choosing any positive integer $m$, one obtains a formula involving an unknown quantity $e^y$. For m = 1, the formula is
|
60 |
+
|
61 |
+
<math display=block>n! = e^y \sqrt{n} \left( \frac{n}{e} \right)^n \left( 1 + O \left( \frac{1}{n} \right) \right).</math>
|
62 |
+
|
63 |
+
The quantity $e^y$ can be found by taking the limit on both sides as $n$ tends to infinity and using Wallis' product, which shows that $e^y=\sqrt{2\pi}$. Therefore, one obtains Stirling's formula:
|
64 |
+
|
65 |
+
<math display=block>n! = \sqrt{2 \pi n} \left( \frac{n}{e} \right)^n \left( 1 + O \left( \frac{1}{n} \right) \right).</math>
|
66 |
+
|
67 |
+
An alternative formula for $n!$ using the gamma function is
|
68 |
+
|
69 |
+
<math display=block> n! = \int_0^\infty x^n e^{-x}{\rm d}x.</math>
|
70 |
+
|
71 |
+
(as can be seen by repeated integration by parts). Rewriting and changing variables x = ny, one obtains
|
72 |
+
|
73 |
+
<math display=block> n! = \int_0^\infty e^{n\ln x-x}{\rm d}x = e^{n \ln n} n \int_0^\infty e^{n(\ln y -y)}{\rm d}y.</math>
|
74 |
+
|
75 |
+
Applying Laplace's method one has
|
76 |
+
|
77 |
+
<math display=block>\int_0^\infty e^{n(\ln y -y)}{\rm d}y \sim \sqrt{\frac{2\pi}{n}} e^{-n},</math>
|
78 |
+
|
79 |
+
which recovers Stirling's formula:
|
80 |
+
|
81 |
+
<math display=block>n! \sim e^{n \ln n} n \sqrt{\frac{2\pi}{n}} e^{-n}
|
82 |
+
|
83 |
+
= \sqrt{2\pi n}\left(\frac{n}{e}\right)^n.
|
84 |
+
|
85 |
+
</math>
|
86 |
+
|
87 |
+
In fact, further corrections can also be obtained using Laplace's method. For example, computing a two-term expansion via Laplace's method yields (using little-o notation)
|
88 |
+
|
89 |
+
<math display=block>\int_0^\infty e^{n(\ln y-y)}{\rm d}y = \sqrt{\frac{2\pi}{n}} e^{-n}
|
90 |
+
|
91 |
+
\left(1+\frac{1}{12 n}+o\left(\frac{1}{n}\right)\right)</math>
|
92 |
+
|
93 |
+
and gives Stirling's formula to two orders:
|
94 |
+
|
95 |
+
<math display=block> n! = \sqrt{2\pi n}\left(\frac{n}{e}\right)^n \left(1 + \frac{1}{12 n}+o\left(\frac{1}{n}\right) \right).
|
96 |
+
|
97 |
+
</math>
|
98 |
+
|
99 |
+
A complex-analysis version of this method is to consider $\frac{1}{n!}$ as a Taylor coefficient of the exponential function $e^z = \sum_{n=0}^\infty \frac{z^n}{n!}$, computed by Cauchy's integral formula as
|
100 |
+
|
101 |
+
<math display=block>\frac{1}{n!} = \frac{1}{2\pi i} \oint_{|z|=r} \frac{e^z}{z^{n+1}}{\rm d}z.</math>

A convergent variant is obtained by expanding Binet's function in a series of inverted rising factorials, with coefficients

<math display=block>c_n = \frac{1}{2n} \sum_{k=1}^{n} \frac{k\left|s(n,k)\right|}{(k+1)(k+2)},</math>
|
116 |
+
|
117 |
+
where s(n, k) denotes the Stirling numbers of the first kind. From this one obtains a version of Stirling's series
|
118 |
+
|
119 |
+
<math display=block>\begin{align}
|
120 |
+
|
121 |
+
\ln\Gamma(x) &= x\ln x - x + \tfrac12\ln\frac{2\pi}{x} + \frac{1}{12(x+1)} + \frac{1}{12(x+1)(x+2)} + \\
|
122 |
+
|
123 |
+
&\quad + \frac{59}{360(x+1)(x+2)(x+3)} + \frac{29}{60(x+1)(x+2)(x+3)(x+4)} + \cdots,
|
124 |
+
|
125 |
+
\end{align}</math>
|
126 |
+
|
127 |
+
which converges when Re(x) > 0.
|
128 |
+
|
129 |
+
The approximation
|
130 |
+
|
131 |
+
<math display=block>\Gamma(z) \approx \sqrt{\frac{2 \pi}{z}} \left(\frac{z}{e} \sqrt{z \sinh\frac{1}{z} + \frac{1}{810z^6} } \right)^z</math>
|
132 |
+
|
133 |
+
and its equivalent form
|
134 |
+
|
135 |
+
<math display=block>2\ln\Gamma(z) \approx \ln(2\pi) - \ln z + z \left(2\ln z + \ln\left(z\sinh\frac{1}{z} + \frac{1}{810z^6}\right) - 2\right)</math>
|
136 |
+
|
137 |
+
can be obtained by rearranging Stirling's extended formula and observing a coincidence between the resultant power series and the Taylor series expansion of the hyperbolic sine function. This approximation is good to more than 8 decimal digits for z with a real part greater than 8. Robert H. Windschitl suggested it in 2002 for computing the gamma function with fair accuracy on calculators with limited program or register memory.
|
138 |
+
|
139 |
+
Gergő Nemes proposed in 2007 an approximation which gives the same number of exact digits as the Windschitl approximation but is much simpler:
|
140 |
+
|
141 |
+
<math display=block>\Gamma(z) \approx \sqrt{\frac{2\pi}{z} } \left(\frac{1}{e} \left(z + \frac{1}{12z - \frac{1}{10z}}\right)\right)^z,</math>
|
142 |
+
|
143 |
+
or equivalently,
|
144 |
+
|
145 |
+
<math display=block> \ln\Gamma(z) \approx \tfrac{1}{2} \left(\ln(2\pi) - \ln z\right) + z\left(\ln\left(z + \frac{1}{12z - \frac{1}{10z}}\right) - 1\right). </math>
|
146 |
+
|
147 |
+
An alternative approximation for the gamma function stated by Srinivasa Ramanujan (Ramanujan 1988) is
|
148 |
+
|
149 |
+
<math display=block>\Gamma(1+x) \approx \sqrt{\pi} \left(\frac{x}{e}\right)^x \left( 8x^3 + 4x^2 + x + \frac{1}{30} \right)^{\frac{1}{6}}</math>
|
150 |
+
|
151 |
+
for x ≥ 0. The equivalent approximation for ln n! has an asymptotic error of 1/1400n<sup>3</sup> and is given by
|
152 |
+
|
153 |
+
<math display=block>\ln n! \approx n\ln n - n + \tfrac{1}{6}\ln(8n^3 + 4n^2 + n + \tfrac{1}{30}) + \tfrac{1}{2}\ln\pi .</math>
|
154 |
+
|
155 |
+
The approximation may be made precise by giving paired upper and lower bounds; one such inequality is
|
156 |
+
|
157 |
+
<math display=block> \sqrt{\pi} \left(\frac{x}{e}\right)^x \left( 8x^3 + 4x^2 + x + \frac{1}{100} \right)^{1/6} < \Gamma(1+x) < \sqrt{\pi} \left(\frac{x}{e}\right)^x \left( 8x^3 + 4x^2 + x + \frac{1}{30} \right)^{1/6}.</math>
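The accuracy of the Windschitl, Nemes and Ramanujan approximations can be probed against math.gamma; a minimal sketch in Python (the helper names are ours):

from math import e, gamma, pi, sinh, sqrt

def windschitl(z):
    return sqrt(2 * pi / z) * ((z / e) * sqrt(z * sinh(1 / z) + 1 / (810 * z ** 6))) ** z

def nemes(z):
    return sqrt(2 * pi / z) * ((z + 1 / (12 * z - 1 / (10 * z))) / e) ** z

def ramanujan(x):
    # Approximates Gamma(1 + x).
    return sqrt(pi) * (x / e) ** x * (8 * x ** 3 + 4 * x ** 2 + x + 1 / 30) ** (1 / 6)

for x in (2.0, 5.0, 10.0):
    exact = gamma(1 + x)
    print(x, exact, windschitl(1 + x), nemes(1 + x), ramanujan(x))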
|
158 |
+
|
159 |
+
In computer science, especially in the context of randomized algorithms, it is common to generate random bit vectors that are powers of two in length. Many algorithms producing and consuming these bit vectors are sensitive to the population count of the bit vectors generated, or to the Manhattan distance between two such vectors. Often of particular interest is the density of "fair" vectors, where the population count of an n-bit vector is exactly $n/2$. This amounts to the probability that an iterated coin toss over many trials leads to a tie game.
|
160 |
+
|
161 |
+
Stirling's approximation to ${n \choose n/2}$, the central and maximal binomial coefficient of the binomial distribution, simplifies especially nicely where $n$ takes the form of $4^k$, for an integer $k$. Here we are interested in how the density of the central population count is diminished compared to $2^n$, deriving the last form in decibel attenuation:
|
162 |
+
|
163 |
+
<math display=block>\begin{align}
|
164 |
+
|
165 |
+
\log_2 {n \choose n/2} - n & = -k - \frac{\log_2(\pi)-1}{2} +O(1/n)\\
|
166 |
+
|
167 |
+
& \approx -k - 0.3257481 \\
|
168 |
+
|
169 |
+
& \approx -k -\frac13 \\
|
170 |
+
|
171 |
+
& \approx \mathbf {3k+1} ~~ \mathrm{dB}~(\text{attenuation})
|
172 |
+
|
173 |
+
\end{align}</math>
|
174 |
+
|
175 |
+
This simple approximation exhibits surprising accuracy:
|
176 |
+
|
177 |
+
<math display=block>\begin{align}
|
178 |
+
|
179 |
+
10\log_{10}(2^{-1024} {1024 \choose 512}) &\approx -16.033159
|
180 |
+
|
181 |
+
~~~\begin{cases}
|
182 |
+
|
183 |
+
k &= 5 \\
|
184 |
+
|
185 |
+
n = 4^k &= 1024 \\
|
186 |
+
|
187 |
+
3 k + 1 &= \mathbf {16} \\
|
188 |
+
|
189 |
+
\end{cases} \\
|
190 |
+
|
191 |
+
10\log_{10}(2^{-1048576} {1048576 \choose 524288}) &\approx -31.083600
|
192 |
+
|
193 |
+
~~~\begin{cases}
|
194 |
+
|
195 |
+
k &= 10 \\
|
196 |
+
|
197 |
+
n = 4^k &= 1048576 \\
|
198 |
+
|
199 |
+
3 k + 1 &= \mathbf {31} \\
|
200 |
+
|
201 |
+
\end{cases}
|
202 |
+
|
203 |
+
\end{align}</math>
|
204 |
+
|
205 |
+
The attenuation in bits is obtained from the value in decibels by dividing by $10\log(2)/\log(10) \approx 3.0103 \approx 3$.
|
206 |
+
|
207 |
+
As a direct fractional estimate:
|
208 |
+
|
209 |
+
<math display=block>\begin{align}
|
210 |
+
|
211 |
+
{n \choose n/2}/2^n & = 2^{\frac{1-\log_2(\pi)}{2}-k} \left( 1 + O(1/n) \right) \\
|
212 |
+
|
213 |
+
& \approx \sqrt{\frac{2}{\pi}} ~ 2^{-k} \\
|
214 |
+
|
215 |
+
& \approx 0.7978846 ~ 2^{-k} \\
|
216 |
+
|
217 |
+
& \approx \mathbf {\frac{4}{5} 2^{-k}}
|
218 |
+
|
219 |
+
\end{align}</math>
|
220 |
+
|
221 |
+
Once again, both examples exhibit accuracy easily besting 1%:
|
222 |
+
|
223 |
+
<math display=block>\begin{align}
|
224 |
+
|
225 |
+
{256 \choose 128} 2^{-256} &\approx 20.072619^{-1}
|
226 |
+
|
227 |
+
~~~\begin{cases}
|
228 |
+
|
229 |
+
k &= 4 \\
|
230 |
+
|
231 |
+
n = 4^k &= 256 \\
|
232 |
+
|
233 |
+
\frac{4}{5} \times \frac{1}{2^4} &= \mathbf {20}^{-1} \\
|
234 |
+
|
235 |
+
\end{cases} \\
|
236 |
+
|
237 |
+
{1048576 \choose 524288} 2^{-1048576} &\approx 1283.3940^{-1}
|
238 |
+
|
239 |
+
~~~\begin{cases}
|
240 |
+
|
241 |
+
k &= 10 \\
|
242 |
+
|
243 |
+
n = 4^k &= 1048576 \\
|
244 |
+
|
245 |
+
\frac{4}{5} \times \frac{1}{2^{10}} &= \mathbf {1280}^{-1}
|
246 |
+
|
247 |
+
\end{cases}
|
248 |
+
|
249 |
+
\end{align}</math>
|
250 |
+
|
251 |
+
Interpreted as an iterated coin toss, a session involving slightly over a million coin flips (a binary million) has one chance in roughly 1300 of ending in a draw.
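The 1-in-1300 figure and the $3k+1$ dB rule can be checked with log-gamma arithmetic, avoiding the huge exact binomial coefficient; a minimal sketch in Python:

from math import lgamma, log

def db(k):
    # 10*log10 of the central binomial coefficient over 2^n, for n = 4**k.
    n = 4 ** k
    log_ratio = lgamma(n + 1) - 2 * lgamma(n / 2 + 1) - n * log(2)
    return 10 * log_ratio / log(10)

for k in (4, 5, 10):
    print(k, db(k), -(3 * k + 1))   # k=5 -> -16.03 vs -16; k=10 -> -31.08 vs -31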
|
252 |
+
|
253 |
+
Both of these approximations (one in log space, the other in linear space) are simple enough for many software developers to obtain the estimate mentally, with exceptional accuracy by the standards of mental estimates.
|
254 |
+
|
255 |
+
The binomial distribution closely approximates the normal distribution for large $n$, so these estimates based on Stirling's approximation also relate to the peak value of the probability mass function for large $n$ and $p = 0.5$, as specified for the following distribution: $ \mathcal{N}(np,np(1-p))$.
|
256 |
+
|
257 |
+
The formula was first discovered by Abraham de Moivre in the form
|
258 |
+
|
259 |
+
<math display=block>n! \sim [{\rm constant}] \cdot n^{n+\frac12} e^{-n}.</math>
|
260 |
+
|
261 |
+
De Moivre gave an approximate rational-number expression for the natural logarithm of the constant. Stirling's contribution consisted of showing that the constant is precisely $\sqrt{2\pi} $.
|
wiki/wikipedia/1023.txt
ADDED
@@ -0,0 +1,42 @@
1 |
+
In approximation theory, Jackson's inequality is an inequality bounding the value of a function's best approximation by algebraic or trigonometric polynomials in terms of the modulus of continuity or modulus of smoothness of the function or of its derivatives. Informally speaking, the smoother the function is, the better it can be approximated by polynomials.
|
2 |
+
|
3 |
+
For trigonometric polynomials, the following was proved by Dunham Jackson:
|
4 |
+
|
5 |
+
Theorem 1: If $f:[0,2\pi]\to \C$ is an $r$ times differentiable periodic function such that
|
6 |
+
$$
|
7 |
+
\left |f^{(r)}(x) \right | \leq 1, \qquad x\in[0,2\pi],
|
8 |
+
$$
|
9 |
+
|
10 |
+
then, for every positive integer $n$, there exists a trigonometric polynomial $T_{n-1}$ of degree at most $n-1$ such that
|
11 |
+
$$
|
12 |
+
\left |f(x) - T_{n-1}(x) \right | \leq \frac{C(r)}{n^r}, \qquad x\in[0,2\pi],
|
13 |
+
$$
|
14 |
+
|
15 |
+
where $C(r)$ depends only on $r$.
|
16 |
+
|
17 |
+
The Akhiezer-Krein-Favard theorem gives the sharp value of $C(r)$ (called the Akhiezer-Krein-Favard constant):
|
18 |
+
$$
|
19 |
+
C(r) = \frac{4}{\pi} \sum_{k=0}^\infty \frac{(-1)^{k(r+1)}}{(2k+1)^{r+1}}~.
|
20 |
+
$$
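The first few Akhiezer-Krein-Favard constants can be evaluated directly by truncating this series; a minimal sketch in Python (the truncation length is arbitrary):

from math import pi

def favard(r, terms=200000):
    # Partial sum of the series for C(r).
    s = sum((-1) ** (k * (r + 1)) / (2 * k + 1) ** (r + 1) for k in range(terms))
    return 4 / pi * s

print(favard(0), favard(1), favard(2))
# ~1.0, ~1.5708 (= pi/2), ~1.2337 (= pi^2/8)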
|
21 |
+
|
22 |
+
Jackson also proved the following generalisation of Theorem 1:
|
23 |
+
|
24 |
+
Theorem 2: One can find a trigonometric polynomial $T_n$ of degree $\le n$ such that
|
25 |
+
$$
|
26 |
+
|f(x) - T_n(x)| \leq \frac{C(r) \omega \left (\frac{1}{n}, f^{(r)} \right )}{n^r}, \qquad x\in[0,2\pi],
|
27 |
+
$$
|
28 |
+
|
29 |
+
where $\omega(\delta, g)$ denotes the modulus of continuity of function $g$ with the step $\delta.$
|
30 |
+
|
31 |
+
An even more general result of four authors can be formulated as the following Jackson theorem.
|
32 |
+
|
33 |
+
Theorem 3: For every natural number $n$, if $f$ is $2\pi$-periodic continuous function, there exists a trigonometric polynomial $T_n$ of degree $\le n$ such that
|
34 |
+
$$
|
35 |
+
|f(x)-T_n(x)|\leq c(k)\omega_k\left(\tfrac{1}{n},f\right),\qquad x\in[0,2\pi],
|
36 |
+
$$
|
37 |
+
|
38 |
+
where constant $c(k)$ depends on $k\in\N,$ and $\omega_k$ is the $k$-th order modulus of smoothness.
|
39 |
+
|
40 |
+
For $k=1$ this result was proved by Dunham Jackson. Antoni Zygmund proved the inequality in the case when $k=2, \omega_2(t,f)\le ct, t>0$ in 1945. Naum Akhiezer proved the theorem in the case $k=2$ in 1956. For $k>2$ this result was established by Sergey Stechkin in 1967.
|
41 |
+
|
42 |
+
Generalisations and extensions are called Jackson-type theorems. A converse to Jackson's inequality is given by Bernstein's theorem. See also constructive function theory.
|
wiki/wikipedia/1024.txt
ADDED
@@ -0,0 +1,71 @@
1 |
+
In probability theory, conditional probability is a measure of the probability of an event occurring, given that another event (by assumption, presumption, assertion or evidence) has already occurred. If the event of interest is A and the event B is known or assumed to have occurred, "the conditional probability of A given B", or "the probability of A under the condition B", is usually written as P(A | B), or occasionally P_B(A). This can also be understood as the fraction of probability B that intersects with A: $P(A \mid B) = \frac{P(A \cap B)}{P(B)}$.
|
2 |
+
|
3 |
+
For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person is sick, then they are much more likely to be coughing. For example, the conditional probability that someone unwell (sick) is coughing might be 75%, in which case we would have that P(Cough) = 5% and P(Cough | Sick) = 75%. Although there is a relationship between A and B in this example, such a relationship or dependence between A and B is not necessary, nor do they have to occur simultaneously.
|
4 |
+
|
5 |
+
P(A | B) may or may not be equal to P(A) (the unconditional probability of A). If P(A | B) = P(A), then events A and B are said to be independent: in such a case, knowledge about either event does not alter the likelihood of each other. P(A | B) (the conditional probability of A given B) typically differs from P(B | A). For example, if a person has dengue fever, the person might have a 90% chance of being tested as positive for the disease. In this case, what is being measured is that if event B (having dengue) has occurred, the probability of A (tested as positive) given that B occurred is 90%, simply writing P(A | B) = 90%. Alternatively, if a person is tested as positive for dengue fever, they may have only a 15% chance of actually having this rare disease due to high false positive rates. In this case, the probability of the event B (having dengue) given that the event A (testing positive) has occurred is 15%, or P(B | A) = 15%. It should be apparent now that falsely equating the two probabilities can lead to various errors of reasoning, which is commonly seen through base rate fallacies.
|
6 |
+
|
7 |
+
While conditional probabilities can provide extremely useful information, limited information is often supplied or at hand. Therefore, it can be useful to reverse or convert a conditional probability using Bayes' theorem: $P(A|B) = {{P(B|A) \cdot P(A)}\over{P(B)}}$. Another option is to display conditional probabilities in a conditional probability table to illuminate the relationship between events.
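A minimal sketch of this reversal on the dengue example above: the 90% sensitivity is from the text, while the 1% prevalence and 5% false positive rate are assumed here purely to reproduce a posterior of roughly 15%:

def bayes(p_b, p_a_given_b, p_a_given_not_b):
    # P(B|A) = P(A|B) P(B) / P(A), with P(A) from the law of total probability.
    p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)
    return p_a_given_b * p_b / p_a

# Assumed numbers: 1% prevalence, 90% sensitivity, 5% false positive rate.
print(bayes(0.01, 0.90, 0.05))   # ~0.154, close to the 15% quoted above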
|
8 |
+
|
9 |
+
Given two events A and B from the sigma-field of a probability space, with the unconditional probability of B being greater than zero (i.e., P(B) > 0), the conditional probability of A given B ($P(A \mid B)$) is the probability of A occurring if B has or is assumed to have happened. A is assumed to be the set of all possible outcomes of an experiment or random trial that has a restricted or reduced sample space. The conditional probability can be found by the quotient of the probability of the joint intersection of events A and B ($P(A \cap B)$), the probability at which A and B occur together (although not necessarily at the same time), and the probability of B: $P(A \mid B) = \frac{P(A \cap B)}{P(B)}$. A related notion, $n$-bounded partial conditional probability, can be defined as the conditionally expected average occurrence of event $A$ in testbeds of length $n$ that adhere to all of the probability specifications $B_i \equiv b_i$, i.e.:
|
22 |
+
|
23 |
+
<math>P^n(A\mid B_1 \equiv b_1, \ldots, B_m \equiv b_m)=
|
24 |
+
|
25 |
+
\operatorname E(\overline{A}^n\mid\overline{B}^n_1=b_1, \ldots, \overline{B}^n_m=b_m)
|
26 |
+
|
27 |
+
</math> The new information can be incorporated as follows:
|
28 |
+
|
29 |
+
Let Ω be a sample space with elementary events {ω}, and let P be the probability measure with respect to the σ-algebra of Ω. Suppose we are told that the event B ⊆ Ω has occurred. A new probability distribution (denoted by the conditional notation) is to be assigned on {ω} to reflect this. All events that are not in B will have null probability in the new distribution. For events in B, two conditions must be met: the probability of B is one and the relative magnitudes of the probabilities must be preserved. The former is required by the axioms of probability, and the latter stems from the fact that the new probability measure has to be the analog of P in which the probability of B is one - and every event that is not in B, therefore, has a null probability. Hence, for some scale factor α, the new distribution must satisfy:
|
30 |
+
|
31 |
+
#$\omega \in B : P(\omega\mid B) = \alpha P(\omega)$
|
32 |
+
|
33 |
+
#$\omega \notin B : P(\omega\mid B) = 0$
|
34 |
+
|
35 |
+
#$\sum_{\omega \in \Omega} {P(\omega\mid B)} = 1.$
|
36 |
+
|
37 |
+
Substituting 1 and 2 into 3 to select α:
|
38 |
+
|
39 |
+
<math>\begin{align}
|
40 |
+
|
41 |
+
1 &= \sum_{\omega \in \Omega} {P(\omega \mid B)} \\
|
42 |
+
|
43 |
+
&= \sum_{\omega \in B} {P(\omega\mid B)} + \cancelto{0}{\sum_{\omega \notin B} P(\omega\mid B)} \\
|
44 |
+
|
45 |
+
&= \alpha \sum_{\omega \in B} {P(\omega)} \\[5pt]
|
46 |
+
|
47 |
+
&= \alpha \cdot P(B) \\[5pt]
|
48 |
+
|
49 |
+
\Rightarrow \alpha &= \frac{1}{P(B)}
|
50 |
+
|
51 |
+
\end{align}</math>
|
52 |
+
|
53 |
+
So the new probability distribution is
|
54 |
+
|
55 |
+
#$\omega \in B: P(\omega\mid B) = \frac{P(\omega)}{P(B)}$
|
56 |
+
|
57 |
+
#$\omega \notin B: P(\omega\mid B) = 0$
|
58 |
+
|
59 |
+
Now for a general event A,
|
60 |
+
|
61 |
+
<math>\begin{align}
|
62 |
+
|
63 |
+
P(A\mid B)
|
64 |
+
|
65 |
+
&= \sum_{\omega \in A \cap B} {P(\omega \mid B)} + \cancelto{0}{\sum_{\omega \in A \cap B^c} P(\omega\mid B)} \\
|
66 |
+
|
67 |
+
&= \sum_{\omega \in A \cap B} {\frac{P(\omega)}{P(B)}} \\[5pt]
|
68 |
+
|
69 |
+
&= \frac{P(A \cap B)}{P(B)}
|
70 |
+
|
71 |
+
\end{align}</math>
|
wiki/wikipedia/1025.txt
ADDED
@@ -0,0 +1,69 @@
1 |
+
In the field of ordinary differential equations, the Mingarelli identity is a theorem that provides criteria for the oscillation and non-oscillation of solutions of some linear differential equations in the real domain. It extends the Picone identity from two to three or more differential equations of the second order.
|
2 |
+
|
3 |
+
Consider the n solutions of the following (uncoupled) system of second order linear differential equations over the t–interval [a, b]:
|
4 |
+
$$
|
5 |
+
(p_i(t) x_i^\prime)^\prime + q_i(t) x_i = 0, x_i(a)=1, x_i^\prime(a)=R_i
|
6 |
+
$$ where $i=1,2, \ldots, n$.
|
7 |
+
|
8 |
+
Let $\Delta$ denote the forward difference operator, i.e.
|
9 |
+
$$
|
10 |
+
\Delta x_i = x_{i+1}-x_i.
|
11 |
+
$$
|
12 |
+
|
13 |
+
The second order difference operator is found by iterating the first order operator as in
|
14 |
+
$$
|
15 |
+
\Delta^2 (x_i) = \Delta(\Delta x_i) = x_{i+2}-2x_{i+1}+x_{i}
|
16 |
+
$$,
|
17 |
+
|
18 |
+
with a similar definition for the higher iterates. Leaving out the independent variable t for convenience, and assuming that x_i(t) ≠ 0 on (a, b], the following identity holds,
|
19 |
+
|
20 |
+
<math>
|
21 |
+
|
22 |
+
\begin{align}
|
23 |
+
|
24 |
+
\left[ x_{n-1}^2\Delta^{n-1}(p_1r_1) \right]_a^b & = \int_a^b (x^\prime_{n-1})^2 \Delta^{n-1}(p_1) - \int_a^b x_{n-1}^2 \Delta^{n-1}(q_1)
|
25 |
+
|
26 |
+
- \sum_{k=0}^{n-1} C(n-1,k)(-1)^{n-k-1}\int_a^b p_{k+1} W^2(x_{k+1},x_{n-1})/x_{k+1}^2,
|
27 |
+
|
28 |
+
\end{align}
|
29 |
+
|
30 |
+
</math>
|
31 |
+
|
32 |
+
where
|
33 |
+
|
34 |
+
*$r_i = x^\prime_i/x_i$ is the logarithmic derivative,
|
35 |
+
|
36 |
+
*$W(x_i, x_j) = x^\prime_ix_j - x_ix^\prime_j$, is the Wronskian determinant,
|
37 |
+
|
38 |
+
*$C(n-1,k)$ are binomial coefficients.
|
39 |
+
|
40 |
+
When n = 2 this equality reduces to the Picone identity.
|
41 |
+
|
42 |
+
The above identity leads quickly to the following comparison theorem for three linear differential equations, which extends the classical Sturm–Picone comparison theorem.
|
43 |
+
|
44 |
+
Let p_i, q_i, i = 1, 2, 3, be real-valued continuous functions on the interval [a, b] and let
|
45 |
+
|
46 |
+
#$(p_1(t) x_1^\prime)^\prime + q_1(t) x_1 = 0, x_1(a)=1, x_1^\prime(a)=R_1$
|
47 |
+
|
48 |
+
#$(p_2(t) x_2^\prime)^\prime + q_2(t) x_2 = 0, x_2(a)=1, x_2^\prime(a)=R_2$
|
49 |
+
|
50 |
+
#$(p_3(t) x_3^\prime)^\prime + q_3(t) x_3 = 0, x_3(a)=1, x_3^\prime(a)=R_3$
|
51 |
+
|
52 |
+
be three homogeneous linear second order differential equations in self-adjoint form, where
|
53 |
+
|
54 |
+
*p_i(t) > 0 for each i and for all t in [a, b] , and
|
55 |
+
|
56 |
+
*the R_i are arbitrary real numbers.
|
57 |
+
|
58 |
+
Assume that for all t in [a, b] we have $\Delta^2(q_1) \ge 0$, $\Delta^2(p_1) \le 0$, and $\Delta^2(p_1(a)R_1) \le 0$.
|
68 |
+
|
69 |
+
If x_1(t) > 0 on [a, b] and x_2(b) = 0, then any solution x_3(t) has at least one zero in [a, b].
|
wiki/wikipedia/1026.txt
ADDED
@@ -0,0 +1,43 @@
1 |
+
In computer science, the shortest common supersequence of two sequences X and Y is the shortest sequence which has X and Y as subsequences. This is a problem closely related to the longest common subsequence problem. Given two sequences X = < x<sub>1</sub>,...,x<sub>m</sub> > and Y = < y<sub>1</sub>,...,y<sub>n</sub> >, a sequence U = < u<sub>1</sub>,...,u<sub>k</sub> > is a common supersequence of X and Y if items can be removed from U to produce X and Y.
|
2 |
+
|
3 |
+
A shortest common supersequence (SCS) is a common supersequence of minimal length. In the shortest common supersequence problem, two sequences X and Y are given, and the task is to find a shortest possible common supersequence of these sequences. In general, an SCS is not unique.
|
4 |
+
|
5 |
+
For two input sequences, an SCS can be formed from a longest common subsequence (LCS) easily. For example, the longest common subsequence of X$[1..m] = abcbdab$ and Y$[1..n] = bdcaba$ is Z$[1..L] = bcba$. By inserting the non-LCS symbols into Z while preserving their original order, we obtain a shortest common supersequence U$[1..S] = abdcabdab$. In particular, the equation $L + S = m + n$ holds for any two input sequences.
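A minimal dynamic-programming sketch of this construction in Python (the helper is ours, not from the text): it fills the standard length table for the SCS of two strings and walks it back to reconstruct one, not necessarily unique, supersequence:

def scs(x, y):
    m, n = len(x), len(y)
    # dp[i][j] = length of a shortest common supersequence of x[:i] and y[:j].
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0 or j == 0:
                dp[i][j] = i + j
            elif x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1])
    # Walk the table backwards to build one SCS.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] < dp[i][j - 1]:
            out.append(x[i - 1]); i -= 1
        else:
            out.append(y[j - 1]); j -= 1
    out.extend(reversed(x[:i])); out.extend(reversed(y[:j]))
    return "".join(reversed(out))

u = scs("abcbdab", "bdcaba")
print(u, len(u))   # an SCS of length 9 = 7 + 6 - 4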
|
6 |
+
|
7 |
+
There is no similar relationship between shortest common supersequences and longest common subsequences of three or more input sequences. (In particular, LCS and SCS are not dual problems.) However, both problems can be solved in $O(n^k)$ time using dynamic programming, where $k$ is the number of sequences, and $n$ is their maximum length. For the general case of an arbitrary number of input sequences, the problem is NP-hard.
|
8 |
+
|
9 |
+
The closely related problem of finding a minimum-length string which is a superstring of a finite set of strings S = { s<sub>1</sub>,s<sub>2</sub>,...,s<sub>n</sub> } is also NP-hard. Several constant factor approximations have been proposed throughout the years, and the current best known algorithm has an approximation factor of 2.4783. However, perhaps the simplest solution is to reformulate the problem as an instance of weighted set cover in such a way that the weight of the optimal solution to the set cover instance is less than twice the length of the shortest superstring S. One can then use the O(log(n))-approximation for weighted set-cover to obtain an O(log(n))-approximation for the shortest superstring (note that this is not a constant factor approximation).
|
10 |
+
|
11 |
+
For any string x in this alphabet, define P(x) to be the set of all strings which are substrings of x. The instance I of set cover is formulated as follows:
|
12 |
+
|
13 |
+
* Let M be empty.
|
14 |
+
|
15 |
+
* For each pair of strings s<sub>i</sub> and s<sub>j</sub>, if the last k symbols of s<sub>i</sub> are the same as the first k symbols of s<sub>j</sub>, then add a string to M that consists of the concatenation with maximal overlap of s<sub>i</sub> with s<sub>j</sub>.
|
16 |
+
|
17 |
+
* Define the universe $\mathcal U$ of the set cover instance to be S
|
18 |
+
|
19 |
+
* Define the set of subsets of the universe to be { P(x) | x ∈ S ∪ M }
|
20 |
+
|
21 |
+
* Define the cost of each subset P(x) to be |x|, the length of x.
|
22 |
+
|
23 |
+
The instance I can then be solved using an algorithm for weighted set cover, and the algorithm can output an arbitrary concatenation of the strings x for which the weighted set cover algorithm outputs P(x).
|
24 |
+
|
25 |
+
Consider the set S = { abc, cde, fab }, which becomes the universe of the weighted set cover instance. In this case, M = { abcde, fabc }. Then the set of subsets of the universe is
|
26 |
+
|
27 |
+
<math display=block>
\begin{align}
\{ P(x) \mid x\in S\cup M \}
&= \{ P(x) \mid x\in \{ abc, cde, fab, abcde, fabc \} \} \\
&= \{ P(abc), P(cde), P(fab), P(abcde), P(fabc) \} \\
&= \{ \{a,b,c,ab,bc,abc\}, \{c,d,e,cd,de,cde\},\ldots, \{f,a,b,c,fa,ab,bc,fab,abc,fabc\} \}
\end{align}
</math>
|
42 |
+
|
43 |
+
which have costs 3, 3, 3, 5, and 4, respectively.
|
wiki/wikipedia/1027.txt
ADDED
@@ -0,0 +1,57 @@
1 |
+
In securities trading, same-day affirmation (SDA), also known as T0, refers to completing the entire trade verification process on the same day that the actual trade took place, and was invented in the early '90s by James Karat, the inventor of straight-through processing, in London. Trade verification is carried out on the institutional side of the market between the investment manager and the broker/dealer. This process ensures that the parties are in agreement about the essential trade details.
|
2 |
+
|
3 |
+
The three key steps in the verification process that Karat created are:
|
4 |
+
|
5 |
+
# Notice of execution by the broker/dealer. “CONFIRMATION”
|
6 |
+
|
7 |
+
# Affirmation or Rejection by the client of the transaction details. “AFFIRMATION” or "REJECTION"
|
8 |
+
|
9 |
+
# Transmission of allocation details by the investment manager (the splits). “ALLOCATION”
|
10 |
+
|
11 |
+
The trade verification process concludes when the affirmation/allocation has been completed and then the clearing and settlement process begins, which also involves custodians, central securities depositories (CSDs), and other participants in the post-trading value chain. SDA leaves more time for the clearing and settlement processes within the intended settlement period, which in most markets means on the second day after trade execution (known as "T+2").
|
12 |
+
|
13 |
+
A market where SDA is the standard is also referred to as a "trade-date environment". This is seen as a critical enabler to achieving shortened settlement cycles, an objective the European Commission is driving through its , and about which the United States has begun discussions as well, propelled in part by research commissioned by the Depository Trust & Clearing Corporation in 2012.
|
14 |
+
|
15 |
+
Under manual verification, the allocation, confirmation and affirmation procedures are conducted sequentially between the investment manager and broker/dealer. There is no involvement of any further intermediary and communication is usually via telephone, fax or email.
|
16 |
+
|
17 |
+
With manual trade verification, the counterparties respond to each other's messages and the relevant data needs to be checked and re-keyed manually. There is a strict sequence of steps; each party must wait for the other to complete its actions before proceeding. Only once all the steps in the trade verification process are completed will the settlement instructions be sent and the next stages of clearing and settlement begin.
|
18 |
+
|
19 |
+
The verification process can be automated in full or in part, for example, where confirmation/affirmation is automated but allocation instructions are sent by fax or email. The process is locally matched and is conducted directly between the broker/dealer and the investment manager through an electronic system, also known as an electronic ETC system, which can be either provided by third-party vendors or developed by the parties themselves.
|
20 |
+
|
21 |
+
Under central matching models, the process is fully automated and centralized using a central matching utility, which is usually provided by third-party vendors. Unlike the local matching models, where trade verification is conducted bilaterally and relies on traditional message flows in which trade information is provided in a set order, thus adding time to trade settlement, central matching allows investment managers and broker/dealers to input the data independently and separately into the centralized matching utility, where the information is then centrally validated and matched.
|
22 |
+
|
23 |
+
The trade verification process can range from fully manual procedures that follow a strict sequencing of steps to full automation where trade details are centrally matched and validated and the processes do not necessarily happen sequentially.
|
24 |
+
|
25 |
+
In practice, where the investment manager is not automated, broker/dealers will often not wait for the affirmation from the investment manager before notifying their settlement agents and submitting their settlement instructions. In this case, settlement instructions are sent on the basis of trade details that have not been affirmed and thus risk being incorrect.
|
26 |
+
|
27 |
+
SDA is unlikely under manual processes because there will be time lags and delays to completion of trade verification beyond the trade date, especially for significant trade volumes and where there are resource constraints.
|
28 |
+
|
29 |
+
At the other end of the spectrum, central matching removes much of the sequentiality in the trade verification process because the counterparties involved input the relevant data independently and separately. The information is then validated and matched centrally and to a large extent synchronously. When the details match, settlement instructions are automatically sent to custodians and settlement agents. What is more, the counterparties receive updates on the status of trades processed through the system, with errors (and the need to take corrective action) being indicated if trades do not match.
|
30 |
+
|
31 |
+
The verification of the trade details between investment manager and broker/dealer is a key activity along the trading and post-trade process, taking place after the trade is executed and before it can be cleared and settled. Automated trade verification (using electronic systems to match the trade details either locally or centrally) provides a means to achieve timely trade verification. Automation assists timely completion of the process for the bulk of the trades that can be sent straight through for settlement, allowing resources to be focused on those trades where manual intervention is required to rectify any errors identified. While automation does not guarantee SDA for all trades, it is a precondition for achieving high rates of SDA.
|
32 |
+
|
33 |
+
SDA leads to settlement efficiency: Settlement efficiency in countries with SDA rates of over 90 percent—India, Taiwan, Hong Kong, Japan, Singapore and Korea—is 26 percent higher than in countries with SDA scores of less than 70 percent—Brazil, Italy, South Africa and the United States. Automation of the trade verification process can deliver SDA through improved trade processing times and eliminate errors inherent in manual processing by removing the requirement to send information back and forth manually between broker/dealer and investment manager. This translates into benefits in the form of a reduction in operational risk and trade failure rates for a given level of operating costs, and a reduction in operating costs for a given risk and failure rate.
|
34 |
+
|
35 |
+
Reduced risk through better accuracy in the trade verification process — the adoption of automated SDA processes reduces the rate at which trade fails occur and mitigates the costs associated with these fails. It does this by making it easier for the investment manager or broker/dealer to identify errors or mismatches in the trade details which, if not corrected up front, could result in the trade failing to settle on time. Furthermore, compared to manual processing, automation will reduce the likelihood of new errors being introduced during the post-trade processes.
|
36 |
+
|
37 |
+
Estimates show that failed trades put as much as US$976 billion in equity transactions and $308 billion in fixed income transactions at risk annually. A reduction in the risk of trade fails implies less time spent on preventing or following up potential or actual fails. Fewer fails mean fewer costs downstream in record-keeping, reconciliations of settlement instructions, corporate actions, claims-handling and other functions required to resolve fails. Therefore, some of the operating cost efficiencies will be passed along the value chain and benefit other parties, not just the investment manager and broker/dealer.
|
38 |
+
|
39 |
+
In addition to the direct benefits of risk and cost reductions, automated SDA processes can generate indirect benefits. These relate to better management of information, increased transparency, and improved monitoring of own positions and performance as well as counterparty performance. Furthermore, it provides a key step towards achieving full straight-through processing (STP) of trades from order to settlement, with additional risk and cost reduction implications.
|
40 |
+
|
41 |
+
If the relevant data is confirmed/affirmed and available on the trade date, records and accounts are more likely to be accurate, and valuations can be conducted in a more effective and timely manner.
|
42 |
+
|
43 |
+
Transparency is improved because the information on trades arrives at one point of entry and is electronically stored, which means that it can be more readily accessed and tracked than communications by email, fax or telephone.
|
44 |
+
|
45 |
+
The electronic storage of relevant trade information, including the history of a trade such as any actions taken by the counterparties to rectify unmatched trades, is likely to improve transparency in the process by leaving an audit trail. It also allows individual firms to track and measure their operational performance and trade processing efficiency such as average response times for allocations, confirmations and affirmations.
|
46 |
+
|
47 |
+
Automation of this part of the process provides a bridge between the front and back office, and so can be considered as one of the necessary measures to move towards straight-through processing.
|
48 |
+
|
49 |
+
The risk reductions and cost efficiencies that can be realized by an individual firm are likely to be more feasible with a market-wide move towards automation and SDA as best operational practice, because this will go further to shorten and harmonize the settlement process.
|
50 |
+
|
51 |
+
Both sides (investment managers and broker/dealers) benefit from the adoption of automation of their counterparts. For example, broker/dealers need their existing (as well as potential) clients to adopt automation in order to reorganize their activities in such a way that fully captures the benefits of automation. If some existing (or potential) clients do not adopt automation, the brokerage firm will still have to organize its operations in order to meet the requirements of its non-automated clients. At present, many investment managers and broker/dealers that have switched to an automated solution find it difficult to benefit from it fully due to the lack of automation of their counterparties. The risk reductions and cost efficiencies that can be realized at individual or bilateral level would therefore be likely to deliver greater overall benefits if more, and ideally all, firms in a given market were to adopt automated processes based on standardized or interoperable systems.
|
52 |
+
|
53 |
+
The degree to which firms in a market use automated trade verification and achieve SDA has further implications in terms of the market-wide benefits that can be realized. In fact, some potential benefits can be extracted only if there is a market-wide move towards automated SDA (at least within that market).
|
54 |
+
|
55 |
+
In some instances, automation and the move towards SDA as best operational practice delivers most benefits if it is adopted not just by most firms within a country, but across the whole relevant economic region. For example, harmonization of settlement practices between EU countries can arguably be achieved more easily in an environment where firms in individual countries have adopted more consistent and efficient verification processes.
|
56 |
+
|
57 |
+
From a wider perspective, these benefits from reductions in the risks and costs borne by investment managers and broker/dealers (or other intermediaries), once these benefits have been realised by a significant part of the market, would be passed through and be reflected in lower prices, resulting in lower transaction costs for end investors and producing associated beneficial effects on liquidity and operation of markets.
|
wiki/wikipedia/1028.txt
ADDED
@@ -0,0 +1,99 @@
1 |
+
Pokhozhaev's identity is an integral relation satisfied by stationary localized solutions to a nonlinear Schrödinger equation or nonlinear Klein-Gordon equation. It was obtained by S.I. Pokhozhaev and is similar to the virial theorem. This relation is also known as D.H. Derrick's theorem. Similar identities can be derived for other equations of mathematical physics.
|
2 |
+
|
3 |
+
Here is a general form due to H. Berestycki and P.-L. Lions.
|
4 |
+
|
5 |
+
Let $g(s)$ be continuous and real-valued, with $g(0)=0$.
|
6 |
+
|
7 |
+
Denote $G(s)=\int_0^s g(t)dt$.
|
8 |
+
|
9 |
+
Let
|
10 |
+
|
11 |
+
<math>u\in L^\infty_{\mathrm{loc}}(\R^n),
|
12 |
+
|
13 |
+
\qquad
|
14 |
+
|
15 |
+
\nabla u\in L^2(\R^n),
|
16 |
+
|
17 |
+
\qquad
|
18 |
+
|
19 |
+
G(u)\in L^1(\R^n),
|
20 |
+
|
21 |
+
\qquad
|
22 |
+
|
23 |
+
n\in\N,
|
24 |
+
|
25 |
+
</math>
|
26 |
+
|
27 |
+
be a solution to the equation
|
28 |
+
$$
|
29 |
+
-\nabla^2 u=g(u)
|
30 |
+
$$,
|
31 |
+
|
32 |
+
in the sense of distributions.
|
33 |
+
|
34 |
+
Then $u$ satisfies the relation
|
35 |
+
$$
|
36 |
+
(n-2)\int_{\R^n}|\nabla u(x)|^2dx=n\int_{\R^n}G(u(x))dx.
|
37 |
+
$$
|
38 |
+
|
39 |
+
Let $n\in\N,N\in\N$
|
40 |
+
|
41 |
+
and let $\alpha^i,1\le i\le n$ and $\beta$ be the self-adjoint Dirac matrices of size $N\times N$:
|
42 |
+
|
43 |
+
<math>
|
44 |
+
|
45 |
+
\alpha^i\alpha^j+\alpha^j\alpha^i=2\delta_{ij}I_N,
|
46 |
+
|
47 |
+
\quad
|
48 |
+
|
49 |
+
\beta^2=I_N,
|
50 |
+
|
51 |
+
\quad
|
52 |
+
|
53 |
+
\alpha^i\beta+\beta\alpha^i=0,
|
54 |
+
|
55 |
+
\quad
|
56 |
+
|
57 |
+
1\le i,j\le n.
|
58 |
+
|
59 |
+
</math>
|
60 |
+
|
61 |
+
Let $D_0=-\mathrm{i}\alpha\cdot\nabla=-\mathrm{i}\sum_{i=1}^n\alpha^i\frac{\partial}{\partial x^i}$ be the massless Dirac operator.
|
62 |
+
|
63 |
+
Let $g(s)$ be continuous and real-valued, with $g(0)=0$.
|
64 |
+
|
65 |
+
Denote $G(s)=\int_0^s g(t)dt$.
|
66 |
+
|
67 |
+
Let $\phi\in L^\infty_{\mathrm{loc}}(\R^n,\C^N)$ be a spinor-valued solution that satisfies the stationary form of the nonlinear Dirac equation,
|
68 |
+
|
69 |
+
<math>
|
70 |
+
|
71 |
+
\omega\phi=D_0\phi+g(\phi^\ast\beta\phi)\beta\phi,
|
72 |
+
|
73 |
+
</math>
|
74 |
+
|
75 |
+
in the sense of distributions,
|
76 |
+
|
77 |
+
with some $\omega\in\R$.
|
78 |
+
|
79 |
+
Assume that
|
80 |
+
|
81 |
+
<math>
|
82 |
+
|
83 |
+
\phi\in H^1(\R^n,\C^N),\qquad
|
84 |
+
|
85 |
+
G(\phi^\ast\beta\phi)\in L^1(\R^n).
|
86 |
+
|
87 |
+
</math>
|
88 |
+
|
89 |
+
Then $\phi$ satisfies the relation
|
90 |
+
|
91 |
+
<math>
|
92 |
+
|
93 |
+
\omega\int_{\R^n}\phi(x)^\ast\phi(x)dx
|
94 |
+
|
95 |
+
=\frac{n-1}{n}\int_{\R^n}\phi(x)^\ast D_0\phi(x)dx
|
96 |
+
|
97 |
+
+\int_{\R^n}G(\phi(x)^\ast\beta\phi(x))dx.
|
98 |
+
|
99 |
+
</math>
|
wiki/wikipedia/1029.txt
ADDED
@@ -0,0 +1,15 @@
1 |
+
In computability theory, computational complexity theory and proof theory, the Hardy hierarchy, named after G. H. Hardy, is an ordinal-indexed family of functions h<sub>α</sub>: N → N (where N is the set of natural numbers, {0, 1, ...}). It is related to the fast-growing hierarchy and slow-growing hierarchy. The hierarchy was first described in Hardy's 1904 paper, "A theorem concerning the infinite cardinal numbers".
|
2 |
+
|
3 |
+
Let μ be a large countable ordinal such that a fundamental sequence is assigned to every limit ordinal less than μ. The Hardy hierarchy of functions h<sub>α</sub>: N → N, for α < μ, is then defined as follows:
|
4 |
+
|
5 |
+
*$ h_0(n) = n,$
|
6 |
+
|
7 |
+
*$ h_{\alpha+1}(n) = h_\alpha(n + 1),$
|
8 |
+
|
9 |
+
*$ h_\alpha(n) = h_{\alpha[n]}(n) $ if α is a limit ordinal.
|
10 |
+
|
11 |
+
Here α[n] denotes the n<sup>th</sup> element of the fundamental sequence assigned to the limit ordinal α. A standardized choice of fundamental sequence for all α ≤ ε<sub>0</sub> is described in the article on the fast-growing hierarchy.
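For ordinals below ω<sup>2</sup> the definition can be executed directly. A minimal Python sketch, encoding α = ω·a + b as the pair (a, b) and using the fundamental sequence (ω·(a+1))[n] = ω·a + n:

def hardy(a, b, n):
    # h_alpha(n) for alpha = omega*a + b.
    if a == 0 and b == 0:
        return n                       # h_0(n) = n
    if b > 0:
        return hardy(a, b - 1, n + 1)  # successor: h_{alpha+1}(n) = h_alpha(n+1)
    return hardy(a - 1, n, n)          # limit: h_{omega*a}(n) = h_{(omega*a)[n]}(n)

print(hardy(0, 5, 3))   # h_5(3) = 8
print(hardy(1, 0, 3))   # h_omega(3) = h_3(3) = 6, i.e. 2n
print(hardy(3, 0, 2))   # h_{omega*3}(2) = 16, i.e. 2^a * n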
|
12 |
+
|
13 |
+
Caicedo (2007) defines a modified Hardy hierarchy of functions $H_\alpha$ by using the standard fundamental sequences, but with α[n+1] (instead of α[n]) in the third line of the above definition.
|
14 |
+
|
15 |
+
The Wainer hierarchy of functions f<sub>α</sub> and the Hardy hierarchy of functions h<sub>α</sub> are related by f<sub>α</sub> = h<sub>ω<sup>α</sup></sub> for all α < ε<sub>0</sub>. Thus, for any α < ε<sub>0</sub>, h<sub>α</sub> grows much more slowly than does f<sub>α</sub>. However, the Hardy hierarchy "catches up" to the Wainer hierarchy at α = ε<sub>0</sub>, such that f<sub>ε<sub>0</sub></sub> and h<sub>ε<sub>0</sub></sub> have the same growth rate, in the sense that f<sub>ε<sub>0</sub></sub>(n-1) ≤ h<sub>ε<sub>0</sub></sub>(n) ≤ f<sub>ε<sub>0</sub></sub>(n+1) for all n ≥ 1.
|
wiki/wikipedia/103.txt
ADDED
@@ -0,0 +1,29 @@
1 |
+
Parrondo's paradox, a paradox in game theory, has been described as: A combination of losing strategies becomes a winning strategy. Winning strategies consisting of various combinations of losing strategies were explored in biology before Parrondo's paradox was published. Simulations show that if $M=3$ and $\epsilon = 0.005,$ Game B is an almost surely losing game as well. In fact, Game B is a Markov chain, and an analysis of its state transition matrix (again with M=3) shows that the steady state probability of using coin 2 is 0.3836, and that of using coin 3 is 0.6164. As coin 2 is selected nearly 40% of the time, it has a disproportionate influence on the payoff from Game B, and results in it being a losing game.
|
2 |
+
|
3 |
+
However, when these two losing games are played in some alternating sequence - e.g. two games of A followed by two games of B (AABBAABB...), the combination of the two games is, paradoxically, a winning game. Not all alternating sequences of A and B result in winning games. For example, one game of A followed by one game of B (ABABAB...) is a losing game, while one game of A followed by two games of B (ABBABB...) is a winning game. This coin-tossing example has become the canonical illustration of Parrondo's paradox – two games, both losing when played individually, become a winning game when played in a particular alternating sequence.
|
4 |
+
|
5 |
+
The apparent paradox has been explained using a number of sophisticated approaches, including Markov chains, flashing ratchets, simulated annealing, and information theory. One way to explain the apparent paradox is as follows:
|
6 |
+
|
7 |
+
* While Game B is a losing game under the probability distribution that results for $C_t $ modulo $M$ when it is played individually ($C_t $ modulo $M$ is the remainder when $C_t $ is divided by $M$), it can be a winning game under other distributions, as there is at least one state in which its expectation is positive.
|
8 |
+
|
9 |
+
* As the distribution of outcomes of Game B depends on the player's capital, the two games cannot be independent. If they were, playing them in any sequence would lose as well.
|
10 |
+
|
11 |
+
The role of $M$ now comes into sharp focus. It serves solely to induce a dependence between Games A and B, so that a player is more likely to enter states in which Game B has a positive expectation, allowing it to overcome the losses from Game A. With this understanding, the paradox resolves itself: The individual games are losing only under a distribution that differs from that which is actually encountered when playing the compound game. In summary, Parrondo's paradox is an example of how dependence can wreak havoc with probabilistic computations made under a naive assumption of independence. A more detailed exposition of this point, along with several related examples, can be found in Philips and Feldman.
|
12 |
+
|
13 |
+
For a simpler example of how and why the paradox works, again consider two games Game A and Game B, this time with the following rules:
|
14 |
+
|
15 |
+
# In Game A, you simply lose $1 every time you play.
|
16 |
+
|
17 |
+
# In Game B, you count how much money you have left — if it is an even number you win $3, otherwise you lose $5.
|
18 |
+
|
19 |
+
Say you begin with $100 in your pocket. If you start playing Game A exclusively, you will obviously lose all your money in 100 rounds. Similarly, if you decide to play Game B exclusively, you will also lose all your money in 100 rounds.
|
20 |
+
|
21 |
+
However, consider playing the games alternately, starting with Game B, followed by A, then by B, and so on (BABABA...). It should be easy to see that you will steadily earn a total of $2 for every two games.
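A minimal sketch of this bookkeeping (the 20-round horizon is just for brevity; the per-round rates are what matter):

<syntaxhighlight lang="python">
# Game A always loses $1; Game B wins $3 if the capital is even and
# loses $5 otherwise, exactly as in the rules above.

def play_a(capital):
    return capital - 1

def play_b(capital):
    return capital + 3 if capital % 2 == 0 else capital - 5

def run(sequence, capital=100, rounds=20):
    for i in range(rounds):
        game = sequence[i % len(sequence)]
        capital = play_a(capital) if game == "A" else play_b(capital)
    return capital

print(run("A"))    # 80: Game A alone loses $1 per round
print(run("B"))    # 80: Game B alone also loses $1 per round on average
print(run("BA"))   # 120: alternating BABA... gains $1 per round
</syntaxhighlight>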
|
22 |
+
|
23 |
+
Thus, even though each game is a losing proposition if played alone, because the results of Game B are affected by Game A, the sequence in which the games are played can affect how often Game B earns you money, and subsequently the result is different from the case where either game is played by itself.
|
24 |
+
|
25 |
+
Parrondo's paradox is used extensively in game theory, and its application to engineering, population dynamics, financial risk, etc., are areas of active research. Parrondo's games are of little direct practical use, for example for investing in stock markets, as the original games require the payoff from at least one of the interacting games to depend on the player's capital. However, the games need not be restricted to their original form, and work continues on generalizing the phenomenon. Similarities to volatility pumping and the two envelopes problem have been pointed out. Simple finance textbook models of security returns have been used to prove that individual investments with negative median long-term returns may be easily combined into diversified portfolios with positive median long-term returns. Similarly, a model that is often used to illustrate optimal betting rules has been used to prove that splitting bets between multiple games can turn a negative median long-term return into a positive one. In evolutionary biology, both bacterial random phase variation and the evolution of less accurate sensors have been modelled and explained in terms of the paradox. In ecology, the periodic alternation of certain organisms between nomadic and colonial behaviors has been suggested as a manifestation of the paradox. The paradox has also been applied to modelling multicellular survival, with some discussion of the feasibility of that model. Applications of Parrondo's paradox can also be found in reliability theory. Interested readers can refer to the three review papers which have been published over the years, with the most recent one examining the Parrondo effect across biology.
|
26 |
+
|
27 |
+
In the early literature on Parrondo's paradox, it was debated whether the word 'paradox' is an appropriate description given that the Parrondo effect can be understood in mathematical terms. The 'paradoxical' effect can be mathematically explained in terms of a convex linear combination.
|
28 |
+
|
29 |
+
However, Derek Abbott, a leading researcher on the topic, provides the following answer regarding the use of the word 'paradox' in this context:
|
wiki/wikipedia/1030.txt
ADDED
@@ -0,0 +1,11 @@
1 |
+
In mathematics, Kōmura's theorem is a result on the differentiability of absolutely continuous Banach space-valued functions, and is a substantial generalization of Lebesgue's theorem on the differentiability of the indefinite integral, which states that Φ : [0, T] → R given by
|
2 |
+
$$
|
3 |
+
\Phi(t) = \int_{0}^{t} \varphi(s) \mathrm{d} s,
|
4 |
+
$$
|
5 |
+
|
6 |
+
is differentiable at t for almost every 0 < t < T when φ : [0, T] → R lies in the L<sup>p</sup> space L<sup>1</sup>([0, T]; R).
|
7 |
+
|
8 |
+
Let (X, || ||) be a reflexive Banach space and let φ : [0, T] → X be absolutely continuous. Then φ is (strongly) differentiable almost everywhere, the derivative φ′ lies in the Bochner space L<sup>1</sup>([0, T]; X), and, for all 0 ≤ t ≤ T,
|
9 |
+
$$
|
10 |
+
\varphi(t) = \varphi(0) + \int_{0}^{t} \varphi'(s) \mathrm{d} s.
|
11 |
+
$$
|
wiki/wikipedia/1031.txt
ADDED
@@ -0,0 +1,7 @@
1 |
+
In the mathematical field of graph theory, the Barnette–Bosák–Lederberg graph is a cubic (that is, 3-regular) polyhedral graph with no Hamiltonian cycle, the smallest such graph possible. It was discovered in the mid-1960s by Joshua Lederberg, David Barnette, and Juraj Bosák, after whom it is named. It has 38 vertices and 69 edges.
|
2 |
+
|
3 |
+
Other larger non-Hamiltonian cubic polyhedral graphs include the 46-vertex Tutte graph and a 44-vertex graph found by Emanuels Grīnbergs using Grinberg's theorem.
|
4 |
+
|
5 |
+
The Barnette–Bosák–Lederberg graph has a similar construction to the Tutte graph but is composed of two Tutte fragments, connected through a pentagonal prism, instead of three connected through a tetrahedron.
|
6 |
+
|
7 |
+
Without the constraint of having exactly three edges at every vertex, much smaller non-Hamiltonian polyhedral graphs are possible, including the Goldner–Harary graph and the Herschel graph.
|
wiki/wikipedia/1032.txt
ADDED
@@ -0,0 +1,3 @@
1 |
+
In algebraic geometry, Chow's moving lemma, proved by Wei-Liang Chow, states: given algebraic cycles Y, Z on a nonsingular quasi-projective variety X, there is another algebraic cycle Z' on X such that Z' is rationally equivalent to Z and Y and Z' intersect properly. The lemma is one of the key ingredients in developing intersection theory, as it is used to show the uniqueness of the theory.
|
2 |
+
|
3 |
+
Even if Z is an effective cycle, it is not, in general, possible to choose the cycle Z' to be effective.
|
wiki/wikipedia/1033.txt
ADDED
@@ -0,0 +1,59 @@
1 |
+
In mathematics, Helly's selection theorem (also called the Helly selection principle) states that a uniformly bounded sequence of monotone real functions admits a convergent subsequence.
|
2 |
+
|
3 |
+
In other words, it is a sequential compactness theorem for the space of uniformly bounded monotone functions.
|
4 |
+
|
5 |
+
It is named for the Austrian mathematician Eduard Helly.
|
6 |
+
|
7 |
+
A more general version of the theorem asserts compactness of the space BV<sub>loc</sub> of functions locally of bounded total variation that are uniformly bounded at a point.
|
8 |
+
|
9 |
+
The theorem has applications throughout mathematical analysis. In probability theory, the result implies compactness of a tight family of measures.
|
10 |
+
|
11 |
+
Let (f<sub>n</sub>)<sub>n ∈ N</sub> be a sequence of increasing functions mapping the real line R into itself,
|
12 |
+
|
13 |
+
and suppose that it is uniformly bounded: there are a,b ∈ R such that a ≤ f<sub>n</sub> ≤ b for every n ∈ N.
|
14 |
+
|
15 |
+
Then the sequence (f<sub>n</sub>)<sub>n ∈ N</sub> admits a pointwise convergent subsequence.
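As a numerical illustration (not part of the theorem, and restricted to the interval [0, 1] purely for convenience), the increasing, uniformly bounded functions f<sub>n</sub>(x) = x<sup>n</sup> converge pointwise to a discontinuous monotone limit, exactly the kind of limit the theorem permits:

<syntaxhighlight lang="python">
import numpy as np

# f_n(x) = x**n on [0, 1] is a uniformly bounded sequence of increasing
# functions (0 <= f_n <= 1). Helly's theorem guarantees a pointwise
# convergent subsequence; here the whole sequence already converges,
# to a limit that is 0 for x < 1 and 1 at x = 1.

xs = np.linspace(0.0, 1.0, 5)
for n in (1, 10, 100, 1000):
    print(n, np.round(xs ** n, 4))
# The pointwise limit of monotone functions need not be continuous,
# only monotone.
</syntaxhighlight>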
|
16 |
+
|
17 |
+
Let U be an open subset of the real line and let f<sub>n</sub> : U → R, n ∈ N, be a sequence of functions. Suppose that
|
18 |
+
|
19 |
+
* (f<sub>n</sub>) has uniformly bounded total variation on any W that is compactly embedded in U. That is, for all sets W ⊆ U with compact closure W̄ ⊆ U,
|
20 |
+
$$
|
21 |
+
\sup_{n \in \mathbf{N}} \left( \left\| f_{n} \right\|_{L^{1} (W)} + \left\| \frac{\mathrm{d} f_{n}}{\mathrm{d} t} \right\|_{L^{1} (W)} \right) < + \infty,
|
22 |
+
$$
|
23 |
+
|
24 |
+
where the derivative is taken in the sense of tempered distributions;
|
25 |
+
|
26 |
+
* and (f<sub>n</sub>) is uniformly bounded at a point. That is, for some t ∈ U, { f<sub>n</sub>(t) | n ∈ N } ⊆ R is a bounded set.
|
27 |
+
|
28 |
+
Then there exists a subsequence f<sub>n<sub>k</sub></sub>, k ∈ N, of f<sub>n</sub> and a function f : U → R, locally of bounded variation, such that
|
29 |
+
|
30 |
+
* f<sub>n<sub>k</sub></sub> converges to f pointwise;
|
31 |
+
|
32 |
+
* and f<sub>n<sub>k</sub></sub> converges to f locally in L<sup>1</sup> (see locally integrable function), i.e., for all W compactly embedded in U,
|
33 |
+
$$
|
34 |
+
\lim_{k \to \infty} \int_{W} \big| f_{n_{k}} (x) - f(x) \big| \mathrm{d} x = 0;
|
35 |
+
$$
|
36 |
+
|
37 |
+
* and, for W compactly embedded in U,
|
38 |
+
$$
|
39 |
+
\left\| \frac{\mathrm{d} f}{\mathrm{d} t} \right\|_{L^{1} (W)} \leq \liminf_{k \to \infty} \left\| \frac{\mathrm{d} f_{n_{k}}}{\mathrm{d} t} \right\|_{L^{1} (W)}.
|
40 |
+
$$
|
41 |
+
|
42 |
+
There are many generalizations and refinements of Helly's theorem. The following theorem, for BV functions taking values in Banach spaces, is due to Barbu and Precupanu:
|
43 |
+
|
44 |
+
Let X be a reflexive, separable Hilbert space and let E be a closed, convex subset of X. Let Δ : X → [0, +∞) be positive-definite and homogeneous of degree one. Suppose that z<sub>n</sub> is a uniformly bounded sequence in BV([0, T]; X) with z<sub>n</sub>(t) ∈ E for all n ∈ N and t ∈ [0, T]. Then there exists a subsequence z<sub>n<sub>k</sub></sub> and functions δ, z ∈ BV([0, T]; X) such that
|
45 |
+
|
46 |
+
* for all t ∈ [0, T],
|
47 |
+
$$
|
48 |
+
\int_{[0, t)} \Delta (\mathrm{d} z_{n_{k}}) \to \delta(t);
|
49 |
+
$$
|
50 |
+
|
51 |
+
* and, for all t ∈ [0, T],
|
52 |
+
$$
|
53 |
+
z_{n_{k}} (t) \rightharpoonup z(t) \in E;
|
54 |
+
$$
|
55 |
+
|
56 |
+
* and, for all 0 ≤ s < t ≤ T,
|
57 |
+
$$
|
58 |
+
\int_{[s, t)} \Delta(\mathrm{d} z) \leq \delta(t) - \delta(s).
|
59 |
+
$$
|
wiki/wikipedia/1034.txt
ADDED
@@ -0,0 +1,5 @@
1 |
+
In functional analysis, the Birkhoff–Kellogg invariant-direction theorem, named after G. D. Birkhoff and O. D. Kellogg, is a generalization of the Brouwer fixed-point theorem. The theorem states that:
|
2 |
+
|
3 |
+
Let U be a bounded open neighborhood of 0 in an infinite-dimensional normed linear space V, and let F:∂U → V be a compact map satisfying ||F(x)|| ≥ α for some α > 0 for all x in ∂U. Then F has an invariant direction, i.e., there exist some x<sub>0</sub> and some λ > 0 satisfying x<sub>0</sub> = λF(x<sub>0</sub>).
|
4 |
+
|
5 |
+
The Birkhoff–Kellogg theorem and its generalizations by Schauder and Leray have applications to partial differential equations.
|
wiki/wikipedia/1035.txt
ADDED
@@ -0,0 +1,9 @@
1 |
+
In mathematics, the Gordon–Luecke theorem on knot complements states that if the complements of two tame knots are homeomorphic, then the knots are equivalent. In particular, any homeomorphism between knot complements must take a meridian to a meridian.
|
2 |
+
|
3 |
+
The theorem is usually stated as "knots are determined by their complements"; however, this is slightly ambiguous, as it considers two knots to be equivalent if there is a self-homeomorphism taking one knot to the other. Thus mirror images are neglected. Often two knots are considered equivalent if they are isotopic. The correct version in this case is that if two knots have complements which are orientation-preserving homeomorphic, then they are isotopic.
|
4 |
+
|
5 |
+
These results follow from the following (also called the Gordon–Luecke theorem): no nontrivial Dehn surgery on a nontrivial knot in the 3-sphere can yield the 3-sphere.
|
6 |
+
|
7 |
+
The theorem was proved by Cameron Gordon and John Luecke. Essential ingredients of the proof are their joint work with Marc Culler and Peter Shalen on the cyclic surgery theorem, combinatorial techniques in the style of Litherland, thin position, and Scharlemann cycles.
|
8 |
+
|
9 |
+
For link complements, it is not in fact true that links are determined by their complements. For example, J. H. C. Whitehead proved that there are infinitely many links whose complements are all homeomorphic to that of the Whitehead link. His construction is to twist along a disc spanning an unknotted component (as is the case for either component of the Whitehead link). Another method is to twist along an annulus spanning two components. Gordon proved that, for the class of links where these two constructions are not possible, there are only finitely many links with a given complement.
|
wiki/wikipedia/1036.txt
ADDED
@@ -0,0 +1,166 @@
1 |
+
In mathematics, convergence tests are methods of testing for the convergence, conditional convergence, absolute convergence, interval of convergence or divergence of an infinite series $\sum_{n=1}^\infty a_n$.
|
2 |
+
|
3 |
+
If the limit of the summand is undefined or nonzero, that is $\lim_{n \to \infty}a_n \ne 0$, then the series must diverge. In this sense, the partial sums are Cauchy only if this limit exists and is equal to zero. The test is inconclusive if the limit of the summand is zero. This is also known as the nth-term test, test for divergence, or the divergence test.
|
4 |
+
|
5 |
+
This is also known as d'Alembert's criterion.
|
6 |
+
|
7 |
+
Suppose that there exists $r$ such that
|
8 |
+
$$
|
9 |
+
\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| = r.
|
10 |
+
$$
|
11 |
+
|
12 |
+
If r < 1, then the series is absolutely convergent. If r > 1, then the series diverges. If r = 1, the ratio test is inconclusive, and the series may converge or diverge.
|
13 |
+
|
14 |
+
This is also known as the nth root test or Cauchy's criterion.
|
15 |
+
|
16 |
+
Let
|
17 |
+
$$
|
18 |
+
r=\limsup_{n\to\infty}\sqrt[n]{|a_n|},
|
19 |
+
$$
|
20 |
+
|
21 |
+
where $\limsup$ denotes the limit superior (possibly $\infty$; if the limit exists it is the same value).
|
22 |
+
|
23 |
+
If r < 1, then the series converges. If r > 1, then the series diverges. If r = 1, the root test is inconclusive, and the series may converge or diverge.
|
24 |
+
|
25 |
+
The root test is stronger than the ratio test: whenever the ratio test determines the convergence or divergence of an infinite series, the root test does too, but not conversely.
|
26 |
+
|
27 |
+
For example, for the series
|
28 |
+
|
29 |
+
1 + 1 + 0.5 + 0.5 + 0.25 + 0.25 + 0.125 + 0.125 + ... = 4,
|
30 |
+
|
31 |
+
convergence follows from the root test but not from the ratio test.
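A quick numerical sketch makes the contrast visible: the term ratios of this series keep oscillating (so the ratio-test limit does not exist and its limit superior is 1), while the n-th roots settle toward $2^{-1/2} < 1$:

<syntaxhighlight lang="python">
# Terms of 1 + 1 + 1/2 + 1/2 + 1/4 + 1/4 + ...: a_n = (1/2)**(n // 2).
a = [0.5 ** (n // 2) for n in range(12)]

ratios = [a[n + 1] / a[n] for n in range(11)]
roots = [a[n] ** (1 / n) for n in range(1, 12)]

print(ratios)  # alternates 1.0, 0.5, 1.0, 0.5, ...: no limit r < 1
print(roots)   # tends to 2**(-0.5) ~ 0.7071 < 1: root test applies
print(sum(0.5 ** (n // 2) for n in range(60)))  # ~ 4.0, as stated
</syntaxhighlight>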
|
32 |
+
|
33 |
+
The series can be compared to an integral to establish convergence or divergence. Let $f:[1,\infty)\to\R_+$ be a non-negative and monotonically decreasing function such that $f(n) = a_n$. If
|
34 |
+
|
35 |
+
<math display="block">\int_1^\infty f(x) dx=\lim_{t\to\infty}\int_1^t f(x) dx<\infty,</math>
|
36 |
+
|
37 |
+
then the series converges. But if the integral diverges, then the series does so as well.
|
38 |
+
|
39 |
+
In other words, the series $\sum_{n=1}^\infty a_n$ converges if and only if the integral converges.
|
40 |
+
|
41 |
+
A commonly-used corollary of the integral test is the p-series test. Let $k$ be a positive integer. Then $\sum_{n=k}^{\infty} \bigg(\frac{1}{n^p}\bigg)$ converges if $p > 1$ and diverges if $p \le 1$.
|
42 |
+
|
43 |
+
The case of $p = 1, k = 1$ yields the harmonic series, which diverges. The case of $p = 2, k = 1$ is the Basel problem and the series converges to $\frac{\pi^2}{6}$. In general, for $p > 1, k = 1$, the series is equal to the Riemann zeta function applied to $p$, that is $\zeta(p)$.
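The two boundary cases can be glanced at numerically (partial sums only illustrate the behaviour; they prove nothing):

<syntaxhighlight lang="python">
from math import pi

# p = 2 (Basel problem): partial sums approach pi**2 / 6.
print(sum(1 / n**2 for n in range(1, 100000)))  # ~ 1.64492
print(pi**2 / 6)                                 # 1.64493...

# p = 1 (harmonic series): partial sums grow like log(n), without bound.
print(sum(1 / n for n in range(1, 100000)))      # ~ 12.09, still climbing
</syntaxhighlight>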
|
44 |
+
|
45 |
+
If the series $\sum_{n=1}^\infty b_n$ is an absolutely convergent series and $|a_n|\le |b_n|$ for sufficiently large n , then the series $\sum_{n=1}^\infty a_n$ converges absolutely.
|
46 |
+
|
47 |
+
If $\{a_n\},\{b_n\}>0$ (that is, each element of the two sequences is positive) and the limit $\lim_{n\to\infty} \frac{a_n}{b_n}$ exists, is finite and non-zero, then $\sum_{n=1}^\infty a_n$ diverges if and only if $\sum_{n=1}^\infty b_n$ diverges.
|
48 |
+
|
49 |
+
Let $\left \{ a_n \right \}$ be a positive non-increasing sequence. Then the sum $A = \sum_{n=1}^\infty a_n$ converges if and only if the sum $A^* = \sum_{n=0}^\infty 2^n a_{2^n}$ converges. Moreover, if they converge, then $A \leq A^* \leq 2A$ holds.
|
50 |
+
|
51 |
+
Suppose the following statements are true:
|
52 |
+
|
53 |
+
# $\sum a_n $ is a convergent series,
|
54 |
+
|
55 |
+
# $\left\{b_n\right\}$ is a monotonic sequence, and
|
56 |
+
|
57 |
+
# $\left\{b_n\right\}$ is bounded.
|
58 |
+
|
59 |
+
Then $\sum a_nb_n $ is also convergent.
|
60 |
+
|
61 |
+
Every absolutely convergent series converges.
|
62 |
+
|
63 |
+
Suppose the following statements are true:
|
64 |
+
|
65 |
+
* $ a_n $ are all positive,
|
66 |
+
|
67 |
+
* $ \lim_{n \to \infty} a_n = 0 $ and
|
68 |
+
|
69 |
+
* for every n, $ a_{n+1} \le a_n $.
|
70 |
+
|
71 |
+
Then $ \sum_{n = k}^\infty (-1)^{n} a_n $ and $ \sum_{n = k}^\infty (-1)^{n+1} a_n $ are convergent series.
|
72 |
+
|
73 |
+
This test is also known as the Leibniz criterion.
|
74 |
+
|
75 |
+
If $\{a_n\}$ is a sequence of real numbers and $\{b_n\}$ a sequence of complex numbers satisfying
|
76 |
+
|
77 |
+
* $a_n \geq a_{n+1}$
|
78 |
+
|
79 |
+
* $\lim_{n \rightarrow \infty}a_n = 0$
|
80 |
+
|
81 |
+
* $\left|\sum^{N}_{n=1}b_n\right|\leq M$ for every positive integer N
|
82 |
+
|
83 |
+
where M is some constant, then the series
|
84 |
+
$$
|
85 |
+
\sum^{\infty}_{n=1}a_n b_n
|
86 |
+
$$
|
87 |
+
|
88 |
+
converges.
|
89 |
+
|
90 |
+
Let $a_n>0$.
|
91 |
+
|
92 |
+
Define
|
93 |
+
$$
|
94 |
+
b_n=n\left(\frac{a_n}{a_{n+1}}-1 \right).
|
95 |
+
$$
|
96 |
+
|
97 |
+
If
|
98 |
+
$$
|
99 |
+
L=\lim_{n\to\infty}b_n
|
100 |
+
$$
|
101 |
+
|
102 |
+
exists there are three possibilities:
|
103 |
+
|
104 |
+
* if L > 1 the series converges
|
105 |
+
|
106 |
+
* if L < 1 the series diverges
|
107 |
+
|
108 |
+
* and if L = 1 the test is inconclusive.
|
109 |
+
|
110 |
+
An alternative formulation of this test is as follows. Let { a<sub>n</sub> } be a sequence of real numbers. Then if b > 1 and K (a natural number) exist such that
|
111 |
+
$$
|
112 |
+
\left|\frac{a_{n+1}}{a_n}\right|\le 1-\frac{b}{n}
|
113 |
+
$$
|
114 |
+
|
115 |
+
for all n > K, then the series $\sum a_n$ converges absolutely.
|
116 |
+
|
117 |
+
Let { a<sub>n</sub> } be a sequence of positive numbers.
|
118 |
+
|
119 |
+
Define
|
120 |
+
$$
|
121 |
+
b_n=\ln n\left(n\left(\frac{a_n}{a_{n+1}}-1 \right)-1\right).
|
122 |
+
$$
|
123 |
+
|
124 |
+
If
|
125 |
+
$$
|
126 |
+
L=\lim_{n\to\infty}b_n
|
127 |
+
$$
|
128 |
+
|
129 |
+
exists, there are three possibilities:
|
130 |
+
|
131 |
+
* if L > 1 the series converges
|
132 |
+
|
133 |
+
* if L < 1 the series diverges
|
134 |
+
|
135 |
+
* and if L = 1 the test is inconclusive.
|
136 |
+
|
137 |
+
Let { a<sub>n</sub> } be a sequence of positive numbers. If $\frac{a_n}{a_{n + 1}} = 1+ \frac{\alpha}{n} + O(1/n^\beta)$ for some β > 1, then $ \sum a_n$ converges if α > 1 and diverges if α ≤ 1.
|
138 |
+
|
139 |
+
Let { a<sub>n</sub> } be a sequence of positive numbers. Then:
|
140 |
+
|
141 |
+
(1) $ \sum a_n$ converges if and only if there is a sequence $b_{n}$ of positive numbers and a real number c > 0 such that $b_k (a_{k}/a_{k+1}) - b_{k+1} \ge c$.
|
142 |
+
|
143 |
+
(2) $ \sum a_n$ diverges if and only if there is a sequence $b_{n}$ of positive numbers such that $b_k (a_{k}/a_{k+1}) - b_{k+1} \le 0$
|
144 |
+
|
145 |
+
and $ \sum 1/b_{n}$ diverges.
|
146 |
+
|
147 |
+
For some specific types of series there are more specialized convergence tests, for instance for Fourier series there is the Dini test.
|
148 |
+
|
149 |
+
Consider the series
|
150 |
+
|
151 |
+
{{NumBlk|:|$\sum_{n=1}^{\infty} \frac{1}{n^\alpha}.$|(1)}}
|
152 |
+
|
153 |
+
Cauchy condensation test implies that (1) is finitely convergent if
|
154 |
+
|
155 |
+
{{NumBlk|:|$ \sum_{n=1}^\infty 2^n \left( \frac 1 {2^n}\right)^\alpha $|(2)}}
|
156 |
+
|
157 |
+
is finitely convergent. Since
|
158 |
+
$$
|
159 |
+
\sum_{n=1}^\infty 2^n \left( \frac 1 {2^n} \right)^\alpha = \sum_{n=1}^\infty 2^{n-n\alpha} = \sum_{n=1}^\infty 2^{(1-\alpha) n}
|
160 |
+
$$
|
161 |
+
|
162 |
+
(2) is a geometric series with ratio $ 2^{(1-\alpha)} $. (2) is finitely convergent if its ratio is less than one (namely $\alpha > 1$). Thus, (1) is finitely convergent if and only if $\alpha > 1$.
|
163 |
+
|
164 |
+
While most of the tests deal with the convergence of infinite series, they can also be used to show the convergence or divergence of infinite products. This can be achieved using the following theorem: Let $\left \{ a_n \right \}_{n=1}^\infty$ be a sequence of positive numbers. Then the infinite product $\prod_{n=1}^\infty (1 + a_n)$ converges if and only if the series $\sum_{n=1}^\infty a_n$ converges. Also similarly, if $0 < a_n < 1$ holds, then $\prod_{n=1}^\infty (1 - a_n)$ approaches a non-zero limit if and only if the series $\sum_{n=1}^\infty a_n$ converges.
|
165 |
+
|
166 |
+
This can be proved by taking the logarithm of the product and using the limit comparison test.
|
wiki/wikipedia/1037.txt
ADDED
@@ -0,0 +1,305 @@
1 |
+
In mathematics, a Gaussian function, often simply referred to as a Gaussian, is a function of the form
|
2 |
+
$$
|
3 |
+
f(x) = a \cdot \exp\left( -\frac{(x - b)^2}{2c^2} \right)
|
4 |
+
$$
|
5 |
+
|
6 |
+
for arbitrary real constants a, b and non-zero c. It is named after the mathematician Carl Friedrich Gauss. The graph of a Gaussian is a characteristic symmetric "bell curve" shape. The parameter a is the height of the curve's peak, b is the position of the center of the peak, and c (the standard deviation, sometimes called the Gaussian RMS width) controls the width of the "bell".
|
7 |
+
|
8 |
+
Gaussian functions are often used to represent the probability density function of a normally distributed random variable with expected value <var>μ</var> = <var>b</var> and variance <var>σ</var><sup>2</sup> = <var>c</var><sup>2</sup>. In this case, the Gaussian is of the form
|
9 |
+
$$
|
10 |
+
g(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{1}{2} \frac{(x - \mu)^2}{\sigma^2} \right).
|
11 |
+
$$
|
12 |
+
|
13 |
+
Gaussian functions are widely used in statistics to describe the normal distributions, in signal processing to define Gaussian filters, in image processing where two-dimensional Gaussians are used for Gaussian blurs, and in mathematics to solve heat equations and diffusion equations and to define the Weierstrass transform.
|
14 |
+
|
15 |
+
Gaussian functions arise by composing the exponential function with a concave quadratic function:
|
16 |
+
$$
|
17 |
+
f(x) = \exp(\alpha x^2 + \beta x + \gamma),
|
18 |
+
$$
|
19 |
+
|
20 |
+
where
|
21 |
+
$$
|
22 |
+
\alpha = -\frac{1}{2c^2},
|
23 |
+
$$
|
24 |
+
$$
|
25 |
+
\beta = b/c^2,
|
26 |
+
$$
|
27 |
+
$$
|
28 |
+
\gamma = \ln a - \frac{b^2}{2c^2}.
|
29 |
+
$$
|
30 |
+
|
31 |
+
The Gaussian functions are thus those functions whose logarithm is a concave quadratic function.
|
32 |
+
|
33 |
+
The parameter c is related to the full width at half maximum (FWHM) of the peak according to
|
34 |
+
$$
|
35 |
+
\text{FWHM} = 2 \sqrt{2 \ln 2}c \approx 2.35482c.
|
36 |
+
$$
|
37 |
+
|
38 |
+
The function may then be expressed in terms of the FWHM, represented by w:
|
39 |
+
$$
|
40 |
+
f(x) = a e^{-4 (\ln 2) (x - b)^2 / w^2}.
|
41 |
+
$$
|
42 |
+
|
43 |
+
Alternatively, the parameter c can be interpreted by saying that the two inflection points of the function occur at <var>x</var> = <var>b</var> ± <var>c</var>.
|
44 |
+
|
45 |
+
The full width at tenth of maximum (FWTM) for a Gaussian could be of interest and is
|
46 |
+
$$
|
47 |
+
\text{FWTM} = 2 \sqrt{2 \ln 10}c \approx 4.29193c.
|
48 |
+
$$
|
49 |
+
|
50 |
+
Gaussian functions are analytic, and their limit as <var>x</var> → ±∞ is 0.
|
51 |
+
|
52 |
+
Gaussian functions are among those functions that are elementary but lack elementary antiderivatives; the integral of the Gaussian function is the error function. Nonetheless, their improper integrals over the whole real line can be evaluated exactly, using the Gaussian integral
|
53 |
+
$$
|
54 |
+
\int_{-\infty}^\infty e^{-x^2} dx = \sqrt{\pi},
|
55 |
+
$$
|
56 |
+
|
57 |
+
and one obtains
|
58 |
+
$$
|
59 |
+
\int_{-\infty}^\infty a e^{-(x - b)^2 / (2c^2)} dx = ac \cdot \sqrt{2\pi}.
|
60 |
+
$$
|
61 |
+
|
62 |
+
This integral is 1 if and only if $a = \tfrac{1}{c\sqrt{2\pi}}$ (the normalizing constant), and in this case the Gaussian is the probability density function of a normally distributed random variable with expected value <var>μ</var> = <var>b</var> and variance <var>σ</var><sup>2</sup> = <var>c</var><sup>2</sup>:
|
63 |
+
$$
|
64 |
+
g(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(\frac{-(x - \mu)^2}{2\sigma^2} \right).
|
65 |
+
$$
|
66 |
+
|
67 |
+
These Gaussians are plotted in the accompanying figure.
|
68 |
+
|
69 |
+
Gaussian functions centered at zero minimize the Fourier uncertainty principle.
|
70 |
+
|
71 |
+
The product of two Gaussian functions is a Gaussian, and the convolution of two Gaussian functions is also a Gaussian, with variance being the sum of the original variances: $c^2 = c_1^2 + c_2^2$. The product of two Gaussian probability density functions (PDFs), though, is not in general a Gaussian PDF.
|
72 |
+
|
73 |
+
Taking the Fourier transform (unitary, angular-frequency convention) of a Gaussian function with parameters <var>a</var> = 1, <var>b</var> = 0 and <var>c</var> yields another Gaussian function, with parameters $c$, <var>b</var> = 0 and $1/c$. So in particular the Gaussian functions with <var>b</var> = 0 and $c = 1$ are kept fixed by the Fourier transform (they are eigenfunctions of the Fourier transform with eigenvalue 1).
|
74 |
+
|
75 |
+
A physical realization is that of the diffraction pattern: for example, the diffraction pattern of a photographic slide whose transmittance has a Gaussian variation is also a Gaussian function.
|
76 |
+
|
77 |
+
The fact that the Gaussian function is an eigenfunction of the continuous Fourier transform allows us to derive the following interesting identity from the Poisson summation formula:
|
78 |
+
$$
|
79 |
+
\sum_{k\in\Z} \exp\left(-\pi \cdot \left(\frac{k}{c}\right)^2\right) = c \cdot \sum_{k\in\Z} \exp\left(-\pi \cdot (kc)^2\right).
|
80 |
+
$$
|
81 |
+
|
82 |
+
The integral of an arbitrary Gaussian function is
|
83 |
+
$$
|
84 |
+
\int_{-\infty}^\infty ae^{-(x - b)^2/2c^2}dx = \sqrt{2} a |c| \sqrt{\pi}.
|
85 |
+
$$
|
86 |
+
|
87 |
+
An alternative form is
|
88 |
+
$$
|
89 |
+
\int_{-\infty}^\infty ke^{-f x^2 + g x + h}dx = \int_{-\infty}^\infty ke^{-f \big(x - g/(2f)\big)^2 + g^2/(4f) + h}dx = k\sqrt{\frac{\pi}{f}}\exp\left(\frac{g^2}{4f} + h\right),
|
90 |
+
$$
|
91 |
+
|
92 |
+
where f must be strictly positive for the integral to converge.
|
93 |
+
|
94 |
+
The integral
|
95 |
+
$$
|
96 |
+
\int_{-\infty}^\infty ae^{-(x - b)^2/2c^2}dx
|
97 |
+
$$
|
98 |
+
|
99 |
+
for some real constants a, b, c > 0 can be calculated by putting it into the form of a Gaussian integral. First, the constant a can simply be factored out of the integral. Next, the variable of integration is changed from x to <var>y</var> = <var>x</var> − b:
|
100 |
+
$$
|
101 |
+
a\int_{-\infty}^\infty e^{-y^2/2c^2}dy,
|
102 |
+
$$
|
103 |
+
|
104 |
+
and then to $z = y/\sqrt{2 c^2}$:
|
105 |
+
$$
|
106 |
+
a\sqrt{2 c^2} \int_{-\infty}^\infty e^{-z^2}dz.
|
107 |
+
$$
|
108 |
+
|
109 |
+
Then, using the Gaussian integral identity
|
110 |
+
$$
|
111 |
+
\int_{-\infty}^\infty e^{-z^2}dz = \sqrt{\pi},
|
112 |
+
$$
|
113 |
+
|
114 |
+
we have
|
115 |
+
$$
|
116 |
+
\int_{-\infty}^\infty ae^{-(x-b)^2/2c^2}dx = a\sqrt{2\pi c^2}.
|
117 |
+
$$
|
118 |
+
|
119 |
+
In two dimensions, the power to which e is raised in the Gaussian function is any negative-definite quadratic form. Consequently, the level sets of the Gaussian will always be ellipses.
|
120 |
+
|
121 |
+
A particular example of a two-dimensional Gaussian function is
|
122 |
+
$$
|
123 |
+
f(x, y) = A \exp\left(-\left(\frac{(x - x_0)^2}{2\sigma_X^2} + \frac{(y - y_0)^2}{2\sigma_Y^2} \right)\right).
|
124 |
+
$$
|
125 |
+
|
126 |
+
Here the coefficient A is the amplitude, x<sub>0</sub>, y<sub>0</sub> is the center, and σ<sub>x</sub>, σ<sub>y</sub> are the x and y spreads of the blob. The figure on the right was created using A = 1, x<sub>0</sub> = 0, y<sub>0</sub> = 0, σ<sub>x</sub> = σ<sub>y</sub> = 1.
|
127 |
+
|
128 |
+
The volume under the Gaussian function is given by
|
129 |
+
$$
|
130 |
+
V = \int_{-\infty}^\infty \int_{-\infty}^\infty f(x, y)dx dy = 2 \pi A \sigma_X \sigma_Y.
|
131 |
+
$$
|
132 |
+
|
133 |
+
In general, a two-dimensional elliptical Gaussian function is expressed as
|
134 |
+
$$
|
135 |
+
f(x, y) = A \exp\Big(-\big(a(x - x_0)^2 + 2b(x - x_0)(y - y_0) + c(y - y_0)^2 \big)\Big),
|
136 |
+
$$
|
137 |
+
|
138 |
+
where the matrix
|
139 |
+
$$
|
140 |
+
\begin{bmatrix} a & b \\ b & c \end{bmatrix}
|
141 |
+
$$
|
142 |
+
|
143 |
+
is positive-definite.
|
144 |
+
|
145 |
+
Using this formulation, the figure on the right can be created using A = 1, (x<sub>0</sub>, y<sub>0</sub>) = (0, 0), a = c = 1/2, b = 0.
|
146 |
+
|
147 |
+
For the general form of the equation the coefficient A is the height of the peak and (x<sub>0</sub>, y<sub>0</sub>) is the center of the blob.
|
148 |
+
|
149 |
+
If we set
|
150 |
+
|
151 |
+
<math>
|
152 |
+
|
153 |
+
\begin{align}
|
154 |
+
|
155 |
+
a &= \frac{\cos^2\theta}{2\sigma_X^2} + \frac{\sin^2\theta}{2\sigma_Y^2}, \\
|
156 |
+
|
157 |
+
b &= -\frac{\sin 2\theta}{4\sigma_X^2} + \frac{\sin 2\theta}{4\sigma_Y^2}, \\
|
158 |
+
|
159 |
+
c &= \frac{\sin^2\theta}{2\sigma_X^2} + \frac{\cos^2\theta}{2\sigma_Y^2},
|
160 |
+
|
161 |
+
\end{align}
|
162 |
+
|
163 |
+
</math>
|
164 |
+
|
165 |
+
then we rotate the blob by a clockwise angle $\theta$ (for counterclockwise rotation, invert the signs in the b coefficient). This can be seen in the following examples:
|
166 |
+
|
167 |
+
Using the following Octave code, one can easily see the effect of changing the parameters:
|
168 |
+
|
169 |
+
<syntaxhighlight lang="octave">
|
170 |
+
|
171 |
+
A = 1;
|
172 |
+
|
173 |
+
x0 = 0; y0 = 0;
|
174 |
+
|
175 |
+
sigma_X = 1;
|
176 |
+
|
177 |
+
sigma_Y = 2;
|
178 |
+
|
179 |
+
[X, Y] = meshgrid(-5:.1:5, -5:.1:5);
|
180 |
+
|
181 |
+
for theta = 0:pi/100:pi
|
182 |
+
|
183 |
+
a = cos(theta)^2 / (2 * sigma_X^2) + sin(theta)^2 / (2 * sigma_Y^2);
|
184 |
+
|
185 |
+
b = -sin(2 * theta) / (4 * sigma_X^2) + sin(2 * theta) / (4 * sigma_Y^2);
|
186 |
+
|
187 |
+
c = sin(theta)^2 / (2 * sigma_X^2) + cos(theta)^2 / (2 * sigma_Y^2);
|
188 |
+
|
189 |
+
Z = A * exp(-(a * (X - x0).^2 + 2 * b * (X - x0) .* (Y - y0) + c * (Y - y0).^2));
|
190 |
+
|
191 |
+
surf(X, Y, Z);
|
192 |
+
|
193 |
+
shading interp;
|
194 |
+
|
195 |
+
view(-36, 36)
|
196 |
+
|
197 |
+
waitforbuttonpress
|
198 |
+
|
199 |
+
end
|
200 |
+
|
201 |
+
</syntaxhighlight>
|
202 |
+
|
203 |
+
Such functions are often used in image processing and in computational models of visual system function—see the articles on scale space and affine shape adaptation.
|
204 |
+
|
205 |
+
Also see multivariate normal distribution.
|
206 |
+
|
207 |
+
A more general formulation of a Gaussian function with a flat-top and Gaussian fall-off can be obtained by raising the content of the exponent to a power $P$:
|
208 |
+
$$
|
209 |
+
f(x) = A \exp\left(-\left(\frac{(x - x_0)^2}{2\sigma_X^2}\right)^P\right).
|
210 |
+
$$
|
211 |
+
|
212 |
+
This function is known as a super-Gaussian function and is often used for Gaussian beam formulation. In a two-dimensional formulation, a Gaussian function along $x$ and $y$ can be combined with potentially different $P_X$ and $P_Y$ to form an elliptical Gaussian distribution:
|
213 |
+
$$
|
214 |
+
f(x , y) = A \exp\left(-\left(\frac{(x - x_0)^2}{2\sigma_X^2} + \frac{(y - y_0)^2}{2\sigma_Y^2}\right)^P\right)
|
215 |
+
$$
|
216 |
+
|
217 |
+
or a rectangular Gaussian distribution:
|
218 |
+
$$
|
219 |
+
f(x, y) = A \exp\left(-\left(\frac{(x - x_0)^2}{2\sigma_X^2}\right)^{P_X} - \left(\frac{(y - y_0)^2}{2\sigma_Y^2}\right)^{P_Y}\right).
|
220 |
+
$$
|
221 |
+
|
222 |
+
In an $n$-dimensional space a Gaussian function can be defined as
|
223 |
+
$$
|
224 |
+
f(x) = \exp(-x^T C x),
|
225 |
+
$$
|
226 |
+
|
227 |
+
where $x = \{x_1, \dots, x_n\}$ is a column of $n$ coordinates, $C$ is a positive-definite $n \times n$ matrix, and ${}^T$ denotes matrix transposition.
|
228 |
+
|
229 |
+
The integral of this Gaussian function over the whole $n$-dimensional space is given as
|
230 |
+
$$
|
231 |
+
\int_{\mathbb{R}^n} \exp(-x^TCx) dx = \sqrt{\frac{\pi^n}{\det C}}.
|
232 |
+
$$
|
233 |
+
|
234 |
+
It can be easily calculated by diagonalizing the matrix $C$ and changing the integration variables to the eigenvectors of $C$.
|
235 |
+
|
236 |
+
More generally a shifted Gaussian function is defined as
|
237 |
+
$$
|
238 |
+
f(x) = \exp(-x^T C x + s^T x),
|
239 |
+
$$
|
240 |
+
|
241 |
+
where $s = \{s_1, \dots, s_n\}$ is the shift vector and the matrix $C$ can be assumed to be symmetric, $C^T = C$, and positive-definite. The following integrals with this function can be calculated with the same technique:
|
242 |
+
$$
|
243 |
+
\int_{\mathbb{R}^n} e^{-x^T C x+v^Tx} dx = \sqrt{\frac{\pi^n}{\det{C}}} \exp\left(\frac{1}{4}v^T C^{-1}v\right) \equiv \mathcal{M}.
|
244 |
+
$$
|
245 |
+
$$
|
246 |
+
\int_{\mathbb{R}^n} e^{- x^T C x + v^T x} (a^T x) dx = (a^T u) \cdot \mathcal{M}, \text{ where } u = \frac{1}{2} C^{-1} v.
|
247 |
+
$$
|
248 |
+
$$
|
249 |
+
\int_{\mathbb{R}^n} e^{- x^T C x + v^T x} (x^T D x) dx = \left( u^T D u + \frac{1}{2} \operatorname{tr} (D C^{-1}) \right) \cdot \mathcal{M}.
|
250 |
+
$$
|
251 |
+
$$
|
252 |
+
\int_{\mathbb{R}^n} e^{- x^T C' x + s'^T x} \left( -\frac{\partial}{\partial x} \Lambda \frac{\partial}{\partial x} \right) e^{-x^T C x + s^T x} dx = {}
|
253 |
+
$$
|
254 |
+
$$
|
255 |
+
\qquad = \left( 2 \operatorname{tr}(C' \Lambda C B^{- 1}) + 4 u^T C' \Lambda C u - 2 u^T (C' \Lambda s + C \Lambda s') + s'^T \Lambda s \right) \cdot \mathcal{M},
|
256 |
+
$$
|
257 |
+
|
258 |
+
where $u = \frac{1}{2} B^{- 1} v,\ v = s + s',\ B = C + C'.$
|
259 |
+
|
260 |
+
A number of fields such as stellar photometry, Gaussian beam characterization, and emission/absorption line spectroscopy work with sampled Gaussian functions and need to accurately estimate the height, position, and width parameters of the function. There are three unknown parameters for a 1D Gaussian function (a, b, c) and five for a 2D Gaussian function $(A; x_0,y_0; \sigma_X,\sigma_Y)$.
|
261 |
+
|
262 |
+
The most common method for estimating the Gaussian parameters is to take the logarithm of the data and fit a parabola to the resulting data set. While this provides a simple curve fitting procedure, the resulting algorithm may be biased by excessively weighting small data values, which can produce large errors in the profile estimate. One can partially compensate for this problem through weighted least squares estimation, reducing the weight of small data values, but this too can be biased by allowing the tail of the Gaussian to dominate the fit. In order to remove the bias, one can instead use an iteratively reweighted least squares procedure, in which the weights are updated at each iteration.
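A minimal sketch of the log-parabola idea (the data, noise level, and small-value threshold below are illustrative assumptions, and the recovery formulas are the α, β, γ relations given earlier in the article; this is a toy, not a production fitting routine):

<syntaxhighlight lang="python">
import numpy as np

# Synthetic, lightly noised samples of a Gaussian with known parameters.
rng = np.random.default_rng(0)
a_true, b_true, c_true = 2.0, 1.0, 0.5

x = np.linspace(-1, 3, 81)
y = a_true * np.exp(-(x - b_true) ** 2 / (2 * c_true ** 2))
y = y * (1 + 0.001 * rng.standard_normal(x.size))  # tiny multiplicative noise

# Crude guard against the small-value bias discussed above: fit the
# parabola log y = A2*x^2 + A1*x + A0 only on well-measured samples.
mask = y > 0.05 * y.max()
A2, A1, A0 = np.polyfit(x[mask], np.log(y[mask]), 2)

c_est = np.sqrt(-1 / (2 * A2))              # from alpha = -1/(2 c^2)
b_est = A1 * c_est ** 2                     # from beta  = b / c^2
a_est = np.exp(A0 + b_est ** 2 / (2 * c_est ** 2))  # from gamma
print(a_est, b_est, c_est)                  # close to 2.0, 1.0, 0.5
</syntaxhighlight>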
|
263 |
+
|
264 |
+
The precision of the resulting estimates is usually analyzed under the following assumptions:

# The noise in the measured profile is either i.i.d. Gaussian, or the noise is Poisson-distributed.
|
265 |
+
|
266 |
+
# The spacing between each sampling (i.e. the distance between pixels measuring the data) is uniform.
|
267 |
+
|
268 |
+
# The peak is "well-sampled", so that less than 10% of the area or volume under the peak (area if a 1D Gaussian, volume if a 2D Gaussian) lies outside the measurement region.
|
269 |
+
|
270 |
+
# The width of the peak is much larger than the distance between sample locations (i.e. the detector pixels must be at least 5 times smaller than the Gaussian FWHM).
|
271 |
+
|
272 |
+
When these assumptions are satisfied, explicit covariance matrices for the 1D profile parameters $a$, $b$, and $c$ can be derived under both i.i.d. Gaussian noise and Poisson noise. A discrete analog of the Gaussian function is the discrete Gaussian kernel:
|
273 |
+
$$
|
274 |
+
T(n, t) = e^{-t} I_n(t)
|
275 |
+
$$
|
276 |
+
|
277 |
+
where $I_n(t)$ denotes the modified Bessel functions of integer order.
|
278 |
+
|
279 |
+
This is the discrete analog of the continuous Gaussian in that it is the solution to the discrete diffusion equation (discrete space, continuous time), just as the continuous Gaussian is the solution to the continuous diffusion equation.
|
280 |
+
|
281 |
+
Gaussian functions appear in many contexts in the natural sciences, the social sciences, mathematics, and engineering. Some examples include:
|
282 |
+
|
283 |
+
* In statistics and probability theory, Gaussian functions appear as the density function of the normal distribution, which is a limiting probability distribution of complicated sums, according to the central limit theorem.
|
284 |
+
|
285 |
+
* Gaussian functions are the Green's function for the (homogeneous and isotropic) diffusion equation (and for the heat equation, which is the same thing), a partial differential equation that describes the time evolution of a mass-density under diffusion. Specifically, if the mass-density at time t=0 is given by a Dirac delta, which essentially means that the mass is initially concentrated in a single point, then the mass-distribution at time t will be given by a Gaussian function, with the parameter a being linearly related to 1/√t and c being linearly related to √t; this time-varying Gaussian is described by the heat kernel. More generally, if the initial mass-density is φ(x), then the mass-density at later times is obtained by taking the convolution of φ with a Gaussian function. The convolution of a function with a Gaussian is also known as a Weierstrass transform.
|
286 |
+
|
287 |
+
* A Gaussian function is the wave function of the ground state of the quantum harmonic oscillator.
|
288 |
+
|
289 |
+
* The molecular orbitals used in computational chemistry can be linear combinations of Gaussian functions called Gaussian orbitals (see also basis set (chemistry)).
|
290 |
+
|
291 |
+
* Mathematically, the derivatives of the Gaussian function can be represented using Hermite functions. For unit variance, the n-th derivative of the Gaussian is the Gaussian function itself multiplied by the n-th Hermite polynomial, up to scale.
|
292 |
+
|
293 |
+
* Consequently, Gaussian functions are also associated with the vacuum state in quantum field theory.
|
294 |
+
|
295 |
+
* Gaussian beams are used in optical systems, microwave systems and lasers.
|
296 |
+
|
297 |
+
* In scale space representation, Gaussian functions are used as smoothing kernels for generating multi-scale representations in computer vision and image processing. Specifically, derivatives of Gaussians (Hermite functions) are used as a basis for defining a large number of types of visual operations.
|
298 |
+
|
299 |
+
* Gaussian functions are used to define some types of artificial neural networks.
|
300 |
+
|
301 |
+
* In fluorescence microscopy a 2D Gaussian function is used to approximate the Airy disk, describing the intensity distribution produced by a point source.
|
302 |
+
|
303 |
+
* In signal processing they serve to define Gaussian filters, such as in image processing where 2D Gaussians are used for Gaussian blurs. In digital signal processing, one uses a discrete Gaussian kernel, which may be defined by sampling a Gaussian, or in a different way.
|
304 |
+
|
305 |
+
* In geostatistics they have been used for understanding the variability between the patterns of a complex training image. They are used with kernel methods to cluster the patterns in the feature space.
|
wiki/wikipedia/1038.txt
ADDED
@@ -0,0 +1,31 @@
1 |
+
File synchronization (or syncing) in computing is the process of ensuring that computer files in two or more locations are updated via certain rules.
|
2 |
+
|
3 |
+
In one-way file synchronization, also called mirroring, updated files are copied from a source location to one or more target locations, but no files are copied back to the source location. In two-way file synchronization, updated files are copied in both directions, usually with the purpose of keeping the two locations identical to each other. In this article, the term synchronization refers exclusively to two-way file synchronization.
|
4 |
+
|
5 |
+
File synchronization is commonly used for home backups on external hard drives or updating for transport on USB flash drives. BitTorrent Sync, Dropbox and SKYSITE are prominent products. Some backup software also supports real-time file sync. The automatic process prevents copying already identical files, and thus can be faster and less error-prone than a manual copy, saving much time. However, this approach is limited in that the synchronized files must physically fit in the portable storage device. Synchronization software that only keeps a list of files and the changed files eliminates this problem (e.g. the "snapshot" feature in Beyond Compare or the "package" feature in Synchronize It!). It is especially useful for mobile workers, or others that work on multiple computers.
|
6 |
+
|
7 |
+
It is possible to synchronize multiple locations by synchronizing them one pair at a time. The Unison Manual describes how to do this:
|
8 |
+
|
9 |
+
If you need to do this, the most reliable way to set things up is to organize the machines into a "star topology," with one machine designated as the "hub" and the rest as "spokes," and with each spoke machine synchronizing only with the hub. The big advantage of the star topology is that it eliminates the possibility of confusing "spurious conflicts" arising from the fact that a separate archive is maintained by Unison for every pair of hosts that it synchronizes.
|
10 |
+
|
11 |
+
Common features of file synchronization systems include:
|
12 |
+
|
13 |
+
* Encryption for security, especially when synchronizing across the Internet.
|
14 |
+
|
15 |
+
* Compressing any data sent across a network.
|
16 |
+
|
17 |
+
* Conflict detection where a file has been modified on both sources, as opposed to where it has only been modified on one. Undetected conflicts can lead to overwriting copies of the file with the most recent version, causing data loss. For conflict detection, the synchronization software needs to keep a database of the synchronized files. Distributed conflict detection can be achieved by version vectors (see the sketch after this list).
|
18 |
+
|
19 |
+
* Open Files Support ensures data integrity when copying data or application files that are in-use or database files that are exclusively locked.
|
20 |
+
|
21 |
+
* Specific support for using an intermediate storage device, such as a removable flash disc, to synchronize two machines. Most synchronizing programs can be used in this way, but providing specific support for this can reduce the amount of data stored on a device.
|
22 |
+
|
23 |
+
* The ability to preview any changes before they are made.
|
24 |
+
|
25 |
+
* The ability to view differences in individual files.
|
26 |
+
|
27 |
+
* Backup between operating systems and transfer between network computers.
|
28 |
+
|
29 |
+
* Ability to edit or use files on multiple computers or operating systems.
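A minimal sketch of version-vector conflict detection, as mentioned in the conflict-detection item above (the replica names and counters are illustrative): each replica keeps a counter per replica id; one version supersedes another when it is at least as large component-wise, and two versions conflict when neither supersedes the other.

<syntaxhighlight lang="python">
def dominates(a: dict, b: dict) -> bool:
    # a dominates b when every counter in a is >= the counter in b.
    keys = set(a) | set(b)
    return all(a.get(k, 0) >= b.get(k, 0) for k in keys)

def relation(a: dict, b: dict) -> str:
    if dominates(a, b) and dominates(b, a):
        return "identical"
    if dominates(a, b):
        return "a is newer"
    if dominates(b, a):
        return "b is newer"
    return "conflict"  # concurrent edits: neither version dominates

print(relation({"laptop": 2, "phone": 1}, {"laptop": 1, "phone": 1}))  # a is newer
print(relation({"laptop": 2, "phone": 1}, {"laptop": 1, "phone": 2}))  # conflict
</syntaxhighlight>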
|
30 |
+
|
31 |
+
Consumer-grade file synchronization solutions are popular; however, for business use, they create a concern of allowing corporate information to sprawl to unmanaged devices and cloud services which are uncontrolled by the organization.
|
wiki/wikipedia/1039.txt
ADDED
@@ -0,0 +1,63 @@
1 |
+
The chain rule for Kolmogorov complexity is an analogue of the chain rule for information entropy, which states:
|
2 |
+
|
3 |
+
<math>
|
4 |
+
|
5 |
+
H(X,Y) = H(X) + H(Y|X)
|
6 |
+
|
7 |
+
</math>
|
8 |
+
|
9 |
+
That is, the combined randomness of two sequences X and Y is the sum of the randomness of X plus whatever randomness is left in Y once we know X.
|
10 |
+
|
11 |
+
This follows immediately from the definitions of conditional and joint entropy, and the fact from probability theory that the joint probability is the product of the marginal and conditional probability:
|
12 |
+
|
13 |
+
<math>
|
14 |
+
|
15 |
+
P(X,Y) = P(X) P(Y|X)
|
16 |
+
|
17 |
+
</math>
|
18 |
+
|
19 |
+
<math>
|
20 |
+
|
21 |
+
\Rightarrow \log P(X,Y) = \log P(X) + \log P(Y|X)
|
22 |
+
|
23 |
+
</math>
|
24 |
+
|
25 |
+
The equivalent statement for Kolmogorov complexity does not hold exactly; it is true only up to a logarithmic term:
|
26 |
+
|
27 |
+
<math>
|
28 |
+
|
29 |
+
K(x,y) = K(x) + K(y|x) + O(\log(K(x,y)))
|
30 |
+
|
31 |
+
</math>
|
32 |
+
|
33 |
+
(An exact version, KP(x, y) = KP(x) + KP(y|x*) + O(1),
|
34 |
+
|
35 |
+
holds for the prefix complexity KP, where x* is a shortest program for x.)
|
36 |
+
|
37 |
+
It states that the shortest program printing X and Y is obtained by concatenating a shortest program printing X with a program printing Y given X, plus at most a logarithmic factor. The result implies that algorithmic mutual information, an analogue of mutual information for Kolmogorov complexity, is symmetric: I(x:y) = I(y:x) + O(log K(x,y)) for all x,y.
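A crude empirical illustration is possible with a general-purpose compressor, under the loudly stated assumption that compressed length is only a rough, far-from-optimal stand-in for the (uncomputable) Kolmogorov complexity:

<syntaxhighlight lang="python">
import os
import zlib

def C(s: bytes) -> int:
    # Compressed length as a computable proxy for K(s); this only
    # suggests the inequalities, it proves nothing.
    return len(zlib.compress(s, 9))

x = os.urandom(2000)   # essentially incompressible: C(x) ~ 2000 bytes
y = x[:1000] * 2       # y is cheap to describe once x is known

# Describing x and y jointly is much cheaper than separately,
# reflecting K(x,y) = K(x) + K(y|x) + small overhead.
print(C(x) + C(y))     # cost of two independent descriptions
print(C(x + y))        # joint description: close to C(x) alone

# Symmetry of the (approximate) algorithmic mutual information.
print(C(x) + C(y) - C(x + y))   # ~ I(x:y)
print(C(y) + C(x) - C(y + x))   # ~ I(y:x), equal up to compressor quirks
</syntaxhighlight>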
|
38 |
+
|
39 |
+
The ≤ direction is obvious: we can write a program to produce x and y by concatenating a program to produce x, a program to produce y given access to x, and (whence the log term) the length of one of the programs, so that we know where to separate the two programs for x and y|x (log(K(x, y)) upper-bounds this length).
|
44 |
+
|
45 |
+
For the ≥ direction, it suffices to show that for all k,l such that k+l = K(x,y) we have that either
|
46 |
+
|
47 |
+
K(x|k,l) ≤ k + O(1)
|
48 |
+
|
49 |
+
or
|
50 |
+
|
51 |
+
K(y|x,k,l) ≤ l + O(1).
|
52 |
+
|
53 |
+
Consider the list (a<sub>1</sub>,b<sub>1</sub>), (a<sub>2</sub>,b<sub>2</sub>), ..., (a<sub>e</sub>,b<sub>e</sub>) of all pairs (a,b) produced by programs of length exactly K(x,y) [hence K(a,b) ≤ K(x,y)]. Note that this list
|
54 |
+
|
55 |
+
* contains the pair (x,y),
|
56 |
+
|
57 |
+
* can be enumerated given k and l (by running all programs of length K(x,y) in parallel),
|
58 |
+
|
59 |
+
* has at most 2<sup>K(x,y)</sup> elements (because there are at most 2<sup>n</sup> programs of length n).
|
60 |
+
|
61 |
+
First, suppose that x appears less than 2<sup>l</sup> times as first element. We can specify y given x,k,l by enumerating (a<sub>1</sub>,b<sub>1</sub>), (a<sub>2</sub>,b<sub>2</sub>), ... and then selecting (x,y) in the sub-list of pairs (x,b). By assumption, the index of (x,y) in this sub-list is less than 2<sup>l</sup> and hence, there is a program for y given x,k,l of length l + O(1).
|
62 |
+
|
63 |
+
Now, suppose that x appears at least 2<sup>l</sup> times as first element. This can happen for at most 2<sup>K(x,y)-l</sup> = 2<sup>k</sup> different strings. These strings can be enumerated given k,l and hence x can be specified by its index in this enumeration. The corresponding program for x has size k + O(1). Theorem proved.
|
wiki/wikipedia/104.txt
ADDED
@@ -0,0 +1,9 @@
1 |
+
The Coase conjecture, developed first by Ronald Coase, is an argument in monopoly theory. The conjecture sets up a situation in which a monopolist sells a durable good to a market where resale is impossible and faces consumers who have different valuations. The conjecture proposes that a monopolist that does not know individuals' valuations will have to sell its product at a low price if the monopolist tries to separate consumers by offering different prices in different periods. This is because the monopoly is, in effect, in price competition with itself over several periods and the consumer with the highest valuation, if he is patient enough, can simply wait for the lowest price. Thus the monopolist will have to offer a competitive price in the first period which will be low. The conjecture holds only when there is an infinite time horizon, as otherwise a possible action for the monopolist would be to announce a very high price until the second to last period, and then sell at the static monopoly price in the last period. The monopolist could avoid this problem by committing to a stable linear pricing strategy or adopting other business strategies.
|
2 |
+
|
3 |
+
Imagine there are two consumers, called $X$ and $Y$, with valuations of the good equal to $x$ and $y$ respectively. The valuations are such that $x<y<2x$. The monopoly cannot directly identify individual consumers but it knows that there are two different valuations of the good. The good being sold is durable so that once a consumer buys it, he or she will still have it in all subsequent periods. This means that after the monopolist has sold to all consumers, there can be no further sales. Also assume that production is such that average cost and marginal cost are both equal to zero.
|
4 |
+
|
5 |
+
The monopolist could try to charge a price of $\text{price} = y$ in the first period and then in the second period $\text{price} =x $, hence price discriminating. This will not result in consumer $Y$ buying in the first period because, by waiting, she could get the price equal to $x$. To make consumer $Y$ indifferent between buying in the first period or the second period, the monopolist will have to charge a price of $\text{price} = dx +(1-d)y$, where $d$ is a discount factor between 0 and 1. This price is such that $dx + (1-d)y < y$.
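A quick numerical check of this indifference price, with purely illustrative values $x = 10$, $y = 15$ and $d = 0.9$:

<syntaxhighlight lang="python">
x, y, d = 10.0, 15.0, 0.9

p1 = d * x + (1 - d) * y    # first-period price making Y indifferent
print(p1)                   # 10.5, strictly below y = 15

# Y is indifferent: buying now yields y - p1; waiting yields d*(y - x).
print(y - p1, d * (y - x))  # both 4.5
</syntaxhighlight>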
|
6 |
+
|
7 |
+
Hence by waiting, $Y$ forces the monopolist to compete on price with its future self.
|
8 |
+
|
9 |
+
Imagine there are $n$ consumers with valuations ranging from $y$ down to a valuation just above zero. The monopolist will want to sell to the consumer with the lowest valuation. This is because production is costless and, by charging a price just above zero, it still makes a profit. Hence to separate the consumers, the monopoly will charge the first consumer $(1-d^n)y$, where $n$ is the number of consumers. If the discount factor is high enough, this price will be close to zero. Hence the conjecture is proved.
|
wiki/wikipedia/1040.txt
ADDED
@@ -0,0 +1,111 @@
1 |
+
In computer programming, a guard is a boolean expression that must evaluate to true if the program execution is to continue in the branch in question.
|
2 |
+
|
3 |
+
Regardless of which programming language is used, a guard clause, guard code, or guard statement, is a check of integrity preconditions used to avoid errors during execution. A typical example is checking that a reference about to be processed is not null, which avoids null-pointer failures. Other uses include using a boolean field for idempotence (so subsequent calls are nops), as in the dispose pattern. The guard provides an early exit from a subroutine, and is a commonly used deviation from structured programming, removing one level of nesting and resulting in flatter code: replacing <code>if guard { ... }</code> with <code>if not guard: return; ...</code>.
|
4 |
+
|
5 |
+
The term is used with specific meaning in APL, Haskell, Clean, Erlang, occam, Promela, OCaml, Swift, Python from version 3.10, and Scala programming languages. In Mathematica, guards are called constraints. Guards are the fundamental concept in Guarded Command Language, a language in formal methods. Guards can be used to augment pattern matching with the possibility to skip a pattern even if the structure matches. Boolean expressions in conditional statements usually also fit this definition of a guard although they are called conditions.
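For instance, in Python 3.10 and later the <code>if</code> clause after a <code>case</code> pattern is the guard; the following minimal sketch shows a case being skipped when its guard fails even though its pattern matches:

<syntaxhighlight lang="python">
def describe(point):
    match point:
        case (x, y) if x == 0 and y == 0:  # guard: pattern matches any pair,
            return "origin"                # but only (0, 0) takes this branch
        case (x, y) if x == y:
            return "on the diagonal"
        case (x, y):                       # no guard: catch-all for pairs
            return "elsewhere"

print(describe((0, 0)))  # origin
print(describe((2, 2)))  # on the diagonal
print(describe((1, 3)))  # elsewhere
</syntaxhighlight>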
|
6 |
+
|
7 |
+
In the following Haskell example, the guards occur between each pair of "|" and "=":
|
8 |
+
|
9 |
+
<syntaxhighlight lang="haskell">
|
10 |
+
|
11 |
+
f x
|
12 |
+
|
13 |
+
| x > 0 = 1
|
14 |
+
|
15 |
+
| otherwise = 0
|
16 |
+
|
17 |
+
</syntaxhighlight>
|
18 |
+
|
19 |
+
This is similar to the corresponding mathematical notation:
|
20 |
+
|
21 |
+
<math>
|
22 |
+
|
23 |
+
f(x) = \left\{ \begin{matrix}
|
24 |
+
|
25 |
+
1 & \mbox{if } x>0 \\
|
26 |
+
|
27 |
+
0 & \mbox{otherwise}
|
28 |
+
|
29 |
+
\end{matrix}
|
30 |
+
|
31 |
+
\right.
|
32 |
+
|
33 |
+
</math>
|
34 |
+
|
35 |
+
In this case the guards are in the "if" and "otherwise" clauses.
|
36 |
+
|
37 |
+
If there are several parallel guards, such as in the example above, they are normally tried in a top-to-bottom order, and the branch of the first to pass is chosen. Guards in a list of cases are typically parallel.
|
38 |
+
|
39 |
+
However, in Haskell list comprehensions the guards are in series, and if any of them fails, the list element is not produced. This would be the same as combining the separate guards with logical AND, except that there can be other list comprehension clauses among the guards.
|
40 |
+
|
41 |
+
A simple conditional expression, already present in CPL in 1963, has a guard on the first sub-expression, and another sub-expression to use in case the first one cannot be used. Some common ways to write this:
|
42 |
+
|
43 |
+
(x>0) -> 1/x; 0
|
44 |
+
|
45 |
+
x>0 ? 1/x : 0
|
46 |
+
|
47 |
+
If the second sub-expression can be a further simple conditional expression, we can give more alternatives to try before the last fall-through:
|
48 |
+
|
49 |
+
(x>0) -> 1/x; (x<0) -> -1/x; 0
|
50 |
+
|
51 |
+
In 1966 ISWIM had a form of conditional expression without an obligatory fall-through case, thus separating guard from the concept of choosing either-or. In the case of ISWIM, if none of the alternatives could be used, the value was to be undefined, which was defined to never compute into a value.
|
52 |
+
|
53 |
+
KRC, a "miniaturized version" of SASL (1976), was one of the first programming languages to use the term "guard". Its function definitions could have several clauses, and the one to apply was chosen based on the guards that followed each clause:
|
54 |
+
|
55 |
+
<syntaxhighlight lang="haskell">
|
56 |
+
|
57 |
+
fac n = 1, n = 0
|
58 |
+
|
59 |
+
= n * fac (n-1), n > 0
|
60 |
+
|
61 |
+
</syntaxhighlight>
|
62 |
+
|
63 |
+
Use of guard clauses, and the term "guard clause", dates at least to Smalltalk practice in the 1990s, as codified by Kent Beck.
|
64 |
+
|
65 |
+
In 1996, Dyalog APL adopted an alternative pure functional style in which the guard is the only control structure. This example, in APL, computes the parity of the input number:<syntaxhighlight lang="apl">
|
66 |
+
|
67 |
+
parity←{
|
68 |
+
|
69 |
+
2∣⍵ : 'odd'
|
70 |
+
|
71 |
+
'even'
|
72 |
+
|
73 |
+
}
|
74 |
+
|
75 |
+
</syntaxhighlight>
|
76 |
+
|
77 |
+
In addition to a guard attached to a pattern, pattern guard can refer to the use of pattern matching in the context of a guard. In effect, a match of the pattern is taken to mean pass. This meaning was introduced in a proposal for Haskell by Simon Peyton Jones titled A new view of guards in April 1997 and was used in the implementation of the proposal. The feature provides the ability to use patterns in the guards of a pattern.
|
78 |
+
|
79 |
+
An example in extended Haskell:
|
80 |
+
|
81 |
+
<syntaxhighlight lang="haskell">
|
82 |
+
|
83 |
+
clunky env var1 var2
|
84 |
+
|
85 |
+
| Just val1 <- lookup env var1
|
86 |
+
|
87 |
+
, Just val2 <- lookup env var2
|
88 |
+
|
89 |
+
= val1 + val2
|
90 |
+
|
91 |
+
-- ...other equations for clunky...
|
92 |
+
|
93 |
+
</syntaxhighlight>
|
94 |
+
|
95 |
+
This would read: "Clunky for an environment and two variables, in case the lookups of the variables from the environment produce values, is the sum of the values. ..." As in list comprehensions, the guards are in series, and if any of them fails the branch is not taken.
|
96 |
+
|
97 |
+
<syntaxhighlight lang="csharp">
|
98 |
+
|
99 |
+
public string Foo(string username) {
|
100 |
+
|
101 |
+
if (username == null) {
|
102 |
+
|
103 |
+
throw new ArgumentNullException(nameof(username));
|
104 |
+
|
105 |
+
}
|
106 |
+
|
107 |
+
// Rest of the method code follows here...
|
108 |
+
|
109 |
+
}
|
110 |
+
|
111 |
+
</syntaxhighlight>
|